
Computer Architecture and Microprocessor

UNIT I
1.0 Introduction

A computer understands information composed of only zeros and ones. Therefore, the computer processes instructions and data in the form of 0s and 1s. We are familiar with the decimal number system, in which the digits are 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The decimal number system is convenient for the programmer. The computer uses binary digits for its operation. In the binary system there are only two digits, 0 and 1. The programmer feeds instructions and data in mnemonics and decimal digits, but for the operation of the computer these are converted to binary bits. This chapter deals with the conversion of binary numbers to decimal numbers and vice versa. It also deals with the hexadecimal and octal systems, which are commonly used as compact ways of writing binary numbers.

1.1 Objective

This lesson provides skills related to number systems and computer languages. The content of this lesson starts with number systems, including the decimal number system, the representation of negative numbers and hexadecimal numbers. It also covers the core concepts of Boolean algebra, registers, counters and logic circuits. By the end of this lesson the student will have a basic understanding of the internals of a computer system and the fundamentals of programming languages.

1.2 Content
1.2.1 Number System

Decimal Number System

We use the decimal number system in everyday work. There are ten digits, 0 to 9. The base of the decimal number system is 10. The following example explains the base and the value of each digit of a decimal number.

Example: 4598 = 4000 + 500 + 90 + 8 = 4 x 10^3 + 5 x 10^2 + 9 x 10^1 + 8 x 10^0

The value of each digit depends on its position in the number, as described below:
The value of the 1st digit of the number from the right side = 1st digit x 10^0
The value of the 2nd digit of the number from the right side = 2nd digit x 10^1
The value of the 3rd digit of the number from the right side = 3rd digit x 10^2
The value of the 4th digit of the number from the right side = 4th digit x 10^3



The value of the nth digit of the number from the right side = nth digit x 10^(n-1) = nth digit x (base)^(n-1).

Binary Number System

In the binary number system there are only two digits, 0 and 1. The binary digits are called bits. The base of the binary number system is 2. In the decimal system there is no difficulty in representing numbers up to 9. There is no single symbol for ten, so it is written as 10. Again, after 99 we have to write 100. Similarly, in the binary system zero is represented by 0 and one by 1. After this there is no digit for two, so two is written as 10 and three as 11. Again, for four we have to write 100. In this way we proceed further. Thus a binary number becomes very long and cumbersome. The following example illustrates the base and the weight of each digit in a binary number.

Example: 1011 (binary) = 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 1 x 2^0 = 8 + 0 + 2 + 1 = 11 (decimal)

The weight of each bit of a number depends on its position, as described below:
The value of the 1st bit of the number from the right side = 1st bit x 2^0
The value of the 2nd bit of the number from the right side = 2nd bit x 2^1
The value of the 3rd bit of the number from the right side = 3rd bit x 2^2
The value of the 4th bit of the number from the right side = 4th bit x 2^3
The value of the nth bit of the number from the right side = nth bit x 2^(n-1) = nth bit x (base)^(n-1)

Thus the rules for decimal and binary numbers are exactly the same. The value of a digit in a number depends on the base and its relative position in the number.

Table 1.1. Decimal to Binary Conversion
Decimal Number    Binary Number
0                 0
1                 1
2                 10
3                 11
4                 100
5                 101
6                 110
7                 111
8                 1000
9                 1001
10                1010
11                1011
12                1100
13                1101
14                1110
15                1111
16                10000
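The positional rules above translate directly into a short program. The following Python sketch is an illustration added here, not part of the original text; the function names dec_to_bin and bin_to_dec are chosen only for clarity. It converts a decimal number to binary by repeated division by 2 and converts a binary string back to decimal using the weight 2^(n-1) of the nth bit from the right.

```python
def dec_to_bin(n):
    """Convert a non-negative decimal integer to a binary string
    by repeated division by 2 (remainders read bottom-up)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder is the next bit
        n //= 2
    return "".join(reversed(bits))

def bin_to_dec(bits):
    """Convert a binary string to decimal using positional weights:
    the nth bit from the right has weight 2^(n-1)."""
    value = 0
    for position, bit in enumerate(reversed(bits)):
        value += int(bit) * (2 ** position)
    return value

# Examples from the text: 1011 (binary) = 11 (decimal)
print(dec_to_bin(11))        # '1011'
print(bin_to_dec("1011"))    # 11
print([dec_to_bin(d) for d in range(17)])  # reproduces Table 1.1
```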



1.2.2 Representation of Negative Numbers

The 2s complement is used to represent the negative of a binary number. Before discussing the 2s complement in the binary number system, let us understand the 9s and 10s complements in the decimal number system.

9s complement. To obtain the 9s complement of a decimal number, each digit of the number is subtracted from 9. For example, the 9s complement of 45 is (99 - 45) = 54. The 9s complement of 523 = (999 - 523) = 476.

10s complement. The 10s complement of a decimal number = its 9s complement + 1. The 10s complement of 45 = 54 + 1 = 55. The 10s complement of 523 = 476 + 1 = 477.

Now let us add the number and its 10s complement and examine the sum.

Example 1
    4 5   (decimal number)
  + 5 5   (its 10s complement)
  _______
    0 0   Carry = 1

In this case the decimal number is of two digits. If we consider the sum of the number and its 10s complement only up to two digits, it is zero. In other words, we neglect the carry of the last stage.

Example 2
    5 2 3   (decimal number)
  + 4 7 7   (its 10s complement)
  _________
    0 0 0   Carry = 1

The decimal number 523 is of three digits. If we consider the sum of the number and its 10s complement only up to three digits, it is zero. Thus we see that the 10s complement gives the negative value of the number. The 10s complement of a decimal number = - (decimal number).


1s Complement. The 1s complement in the binary number system is similar to the 9s complement in the decimal system. The 1s complement of a binary number is obtained by subtracting each bit of the number from 1. The 1s complement of 01 is 10. The 1s complement of 111 is 000. Thus the 1s complement of a binary number can be obtained by simply changing each 1 to 0 and each 0 to 1.

Example 1. Find the 1s complement of 100110.
The 1s complement of 100110 = 011001.

Example 2. Find the 1s complement of 0000.
The 1s complement of 0000 = 1111.

Example 3. Find the 1s complement of 11111.
The 1s complement of 11111 = 00000.

2s Complement. The 2s complement in the binary number system is similar to the 10s complement in the decimal number system. The 2s complement of a binary number = its 1s complement + 1.

Example 1. Find the 2s complement of 10011.
The 2s complement of 10011 = 01100 + 1 = 01101.

Example 2. Find the 2s complement of 111.
The 2s complement of 111 = 000 + 1 = 001.

Example 3. Find the 2s complement of 0000.
The 2s complement of 0000 = 1111 + 1 = 0000 (the final carry is dropped).

Now let us examine the sum when a binary number is added to its 2s complement.

Example 1.
Binary number     = 1001
Its 1s complement = 0110
Its 2s complement = 0110 + 1 = 0111

    1 0 0 1   (number)
  + 0 1 1 1   (its 2s complement)
  _________
    0 0 0 0   Carry = 1



The last carry is lost if the processor is of 4 bits, or if the sum is considered only up to 4 bits. For an 8-bit processor the number and its 2s complement are written in 8 bits as given below:

Number        = 00001001
1s complement = 11110110
2s complement = 11110110 + 1 = 11110111

    Number        = 00001001
  + 2s complement = 11110111
  ________________________________
                    00000000   Carry = 1

The last carry is neglected if only the 8-bit sum is considered.

Example 2.
+4 (decimal)       = 00000100 (binary)
1s complement      = 11111011
2s complement (-4) = 11111011 + 1 = 11111100

    +4   = 00000100
  + (-4) = 11111100
  ________________________________
           00000000   Carry = 1

Example 3. Add +5 and -7.
+7                 = 00000111
1s complement      = 11111000
2s complement (-7) = 11111000 + 1 = 11111001

    +5   = 00000101
  + (-7) = 11111001
  ________________________________
    -2   = 11111110

Check:
+2                 = 00000010
1s complement      = 11111101
2s complement (-2) = 11111101 + 1 = 11111110

Thus we see that the 2s complement of a binary number represents its negative.

Binary Subtraction Using 2s Complement. The addition of the 2s complement of a number is equivalent to the subtraction of the number. Suppose we want to subtract 0010 (2 decimal) from 0101 (5 decimal). If the



2s complement of 0010 (2 decimal) is added to 0101 (5 decimal), the sum will be 0011 (3 decimal). It is equal to 0101 (5 decimal) - 0010 (2 decimal) = 0011 (3 decimal).

Ordinary binary subtraction:
    0 1 0 1   (5 decimal)
  - 0 0 1 0   (2 decimal)
  _________
    0 0 1 1   (3 decimal)

Subtraction using 2s complement:
1s complement of 0010 (2 decimal) = 1101
2s complement of 0010             = 1101 + 1 = 1110

    0 1 0 1   (5 decimal)
  + 1 1 1 0   (2s complement of 2)
  _________
    0 0 1 1   (3 decimal)   Carry = 1

The carry of the last stage is neglected.
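The 1s and 2s complement rules above can be checked with a short program. The Python sketch below is illustrative only; the function names and the default 8-bit word size are assumptions made here. It forms the 2s complement of a number within a fixed word size and shows that adding it performs subtraction once the final carry is discarded.

```python
def ones_complement(x, bits=8):
    """Invert every bit of x within the given word size."""
    return x ^ ((1 << bits) - 1)

def twos_complement(x, bits=8):
    """2s complement = 1s complement + 1, truncated to the word size."""
    return (ones_complement(x, bits) + 1) & ((1 << bits) - 1)

def subtract(a, b, bits=8):
    """Compute a - b by adding the 2s complement of b and
    discarding the carry out of the most significant bit."""
    return (a + twos_complement(b, bits)) & ((1 << bits) - 1)

print(format(twos_complement(0b0010, 4), "04b"))   # '1110'  (-2 in 4 bits)
print(format(subtract(0b0101, 0b0010, 4), "04b"))  # '0011'  (5 - 2 = 3)
print(format(twos_complement(4), "08b"))           # '11111100' (-4 in 8 bits)
```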

1.2.3 2s Complement Representation of Numbers

The 2s complement of a binary number x, which has n bits, is given by (2^n - x). Consider the number +3, whose binary representation is 011. It has three bits. Hence the 2s complement of +3 is (2^3 - 3), which is 5. The binary representation of 5 is 101. Thus, 101 is the 2s complement of 011. In a computer, all numbers are represented in a uniform fashion using a fixed number of bits. Thus, for an n-bit machine, the range of numbers it can handle is 0 to 2^n - 1. For simplicity, consider a 4-bit machine. Sixteen numbers (0 to 15) can normally be represented using these four bits. Now we devise a new scheme of representing negative numbers as follows. We use the first seven combinations of bits for representing the positive numbers one to seven. We reserve seven of the remaining combinations for representing the negative numbers -1 to -7. Thus we have divided the total range into two parts, 0 and 8 being common to the two halves. Now we restrict ourselves to the use of numbers having a maximum magnitude of seven. In this way we can represent both positive and negative numbers as indicated in Fig. 1.1. With reference to the figure we see that the code 1111, which normally represents 15, is assigned to -1. Similarly, the binary equivalent of 14 is assigned to -2. Since 15 = 2^4 - 1, 14 = 2^4 - 2, etc., this is called the 2s complement representation of -1, -2, etc. In general, in the 2s complement representation of an n-bit number the most significant bit indicates the sign and the remaining (n-1) bits represent the magnitude. There is a simple procedure to obtain the 2s complement of a binary number. We first complement each bit of the number (i.e., replace 1 by 0 and 0 by 1). Then we add 1 to the result. For example, consider the number 5, whose binary representation is



0101. Bit complementation yields 1010. Now adding a 1 to this number gives 1011, which is the 2s complement representation of -5. Yet another method of obtaining the 2s complement of a binary number is to scan the number from right to left and complement all bits appearing after the first appearance of a 1. For example, the 2s complement of 0010 is 1110 and that of 0011 is 1101.

Fig. 1.1. Twos complement representation of numbers.

1.2.4 Binary Coded Decimal (BCD)

In the BCD code a decimal digit is represented by four binary bits. For example, 5 is represented by 0101. If there are two or more digits in a decimal number, each decimal digit is represented by four binary bits. For example, 93 is represented by 1001 0011. The decimal number 2354 is represented by 0010 0011 0101 0100. There are several BCD codes, such as the standard BCD code; the 2, 4, 2, 1 BCD code; the excess-3 BCD code; etc. In the standard BCD code the weights of the binary bits are 8, 4, 2, 1. These weights are the same as those in an ordinary binary number. The standard BCD code does not use 1010 to 1111, i.e. 10 to 15 (decimal). 10 in BCD form is represented by 00010000 and 15 by 00010101. BCD codes are widely used in instruments and calculators. Table 1.2 gives the BCD representation of decimal numbers from 0 to 9. Table 1.3 shows the BCD representation of some two-digit decimal numbers.

Gray Code. The reflected binary or Gray code is shown in Table 1.4. In this code only one bit changes in the transition from one number to the next higher number. The Gray code is used in a shaft encoder, which indicates the angular position of a shaft. The use of the Gray



code reduces errors. Suppose that the present position of the shaft is indicated by the Gray code 0100, which is for 7. If the position changes to 8, the Gray code will be 1100. If the change in the binary bit is not picked up by the detector, the encoder will show the previous position, i.e., 7. But in the case of the ordinary binary code 7 is represented by 0111 and 8 by 1000. Now suppose the detector picks up the least significant bits, i.e., 000, but fails to pick up the most significant bit 1; the output will be 0000 instead of 1000. So there can be a large error if a simple binary code is used for a shaft encoder.

Table 1.2. Decimal to Standard BCD Conversion
Decimal Number    Standard BCD Number
0                 0000
1                 0001
2                 0010
3                 0011
4                 0100
5                 0101
6                 0110
7                 0111
8                 1000
9                 1001

Table 1.3. Decimal to Standard BCD Conversion
Decimal Number    Standard BCD Number
10                00010000
11                00010001
12                00010010
13                00010011
14                00010100
15                00010101
30                00110000
45                01000101
64                01100100
73                01110011
98                10011000



Table 1.4. Gray Code
Decimal    Binary Code    Gray Code
0          0000           0000
1          0001           0001
2          0010           0011
3          0011           0010
4          0100           0110
5          0101           0111
6          0110           0101
7          0111           0100
8          1000           1100
9          1001           1101
10         1010           1111
11         1011           1110
12         1100           1010
13         1101           1011
14         1110           1001
15         1111           1000
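A short sketch can generate the BCD and Gray codes listed in Tables 1.2 to 1.4. This is an illustration added here (the function names are assumptions); the Gray code uses the standard binary-to-Gray rule, in which each Gray bit is the XOR of adjacent binary bits.

```python
def to_bcd(n):
    """Encode a decimal number as standard BCD: four bits per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

def binary_to_gray(n):
    """Convert a binary number to its Gray code: g = b XOR (b >> 1)."""
    return n ^ (n >> 1)

print(to_bcd(93))      # '1001 0011'
print(to_bcd(2354))    # '0010 0011 0101 0100'

# Reproduce Table 1.4: only one bit changes between consecutive Gray codes.
for d in range(16):
    print(d, format(d, "04b"), format(binary_to_gray(d), "04b"))
```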

1.2.5 Hexadecimal Number System

The base of the hexadecimal number system is 16. The digits from 0 to 9 are the same as those of the decimal number system. In the hexadecimal system 10 is represented by A, 11 by B, 12 by C, 13 by D, 14 by E and 15 by F. The decimal number 16 is represented by 10 in the hexadecimal system; 17 by 11; 18 by 12; 32 by 20; 33 by 21; and so on. A hexadecimal digit is represented by four binary bits. For example, 6 is represented by 0110, B by 1011 and F by 1111. If there are two or more digits in a hexadecimal number, each hexadecimal digit is represented by four binary bits. For example, 98 is represented by 1001 1000; 5C by 0101 1100; 3AFB by 0011 1010 1111 1011. The hexadecimal system utilizes the full capacity of four binary bits. The BCD system does not utilize the full capacity of the four binary bits which represent a decimal digit. In the hexadecimal system an 8-bit word can represent up to 11111111 or 255 (decimal), whereas in BCD it can represent only up to 10011001 or 99 (decimal). Thus the hardware cost



in the hexadecimal system is reduced. The BCD system has the advantage of simplicity in getting the output in the decimal system. Table 1.5 and Table 1.6 show decimal, hexadecimal and binary representations.

Conversion of a Hexadecimal Number to a Decimal Number. The following examples illustrate the conversion of a hexadecimal number to a decimal number.

Example 1. Convert the hexadecimal number 5B to its decimal equivalent.
B is the 1st digit from the right side; its weight is B x 16^0.
5 is the 2nd digit from the right side; its weight is 5 x 16^1.
5B (hexadecimal) = 5 x 16^1 + B x 16^0 = 80 + B = 80 + 11 = 91 (decimal)

Table 1.5
Decimal Number    Hexadecimal Number    Binary representation of Hexadecimal number
0                 0                     0000
1                 1                     0001
2                 2                     0010
3                 3                     0011
4                 4                     0100
5                 5                     0101
6                 6                     0110
7                 7                     0111
8                 8                     1000
9                 9                     1001
10                A                     1010
11                B                     1011
12                C                     1100
13                D                     1101
14                E                     1110
15                F                     1111


Table 1.6
Decimal Number    Hexadecimal Number    Binary representation of Hexadecimal number
42                2A                    00101010
92                5C                    01011100
163               A3                    10100011
168               A8                    10101000
180               B4                    10110100
185               B9                    10111001
188               BC                    10111100
191               BF                    10111111
202               CA                    11001010
205               CD                    11001101
222               DE                    11011110
228               E4                    11100100
236               EC                    11101100
239               EF                    11101111
248               F8                    11111000
254               FE                    11111110
255               FF                    11111111

Example 2. Convert the hexadecimal number 5A9 to its decimal equivalent.
9 is the 1st digit from the right; its weight is 9 x 16^0.
A is the 2nd digit from the right; its weight is A x 16^1.
5 is the 3rd digit from the right; its weight is 5 x 16^2.
5A9 (hex) = 5 x 16^2 + A x 16^1 + 9 x 16^0 = 5 x 256 + 10 x 16 + 9 = 1280 + 160 + 9 = 1449 (decimal).

Example 3. Convert the hexadecimal number 1A5D to its decimal equivalent.
1A5D (hex) = 1 x 16^3 + A x 16^2 + 5 x 16^1 + D x 16^0 = 4096 + 10 x 256 + 80 + 13 = 4096 + 2560 + 80 + 13 = 6749 (decimal).

Conversion of a Decimal Number to a Hexadecimal Number.



For the conversion of a decimal number to a hexadecimal number, the technique of repeated division by 16 can be used. The following examples illustrate the technique.

Example 1. Convert 35 (decimal) to its equivalent hexadecimal number.
35 / 16 = 2, remainder 3
 2 / 16 = 0, remainder 2
Reading the remainders from bottom to top, the hexadecimal number is 23.
Check: 23 (hex) = 2 x 16^1 + 3 x 16^0 = 32 + 3 = 35 (decimal)

Example 2. Convert the decimal number 10767 to a hexadecimal number.
10767 / 16 = 672, remainder 15 (F)
  672 / 16 =  42, remainder 0
   42 / 16 =   2, remainder 10 (A)
    2 / 16 =   0, remainder 2
Reading the remainders from bottom to top, the hexadecimal number is 2A0F.
Check: 2A0F (hex) = 2 x 16^3 + A x 16^2 + 0 x 16^1 + F x 16^0 = 2 x 4096 + 10 x 256 + 0 + 15 = 10767 (decimal)

Conversion of a Hexadecimal Number to a Binary Number. For the conversion of a hexadecimal number to a binary number, each digit of the given hexadecimal number is converted to its 4-bit binary equivalent.

Example 1. Convert the hexadecimal number 5CB8 to its equivalent binary number.



(5CB8)16 = (0101)(1100)(1011)(1000)
              5     C     B     8
(5CB8)16 = (0101110010111000)2

Example 2. Convert the hexadecimal number 4F2D to its binary equivalent.
(4F2D)16 = (0100)(1111)(0010)(1101)
              4     F     2     D
(4F2D)16 = (0100111100101101)2
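The hexadecimal conversions above follow the same positional and repeated-division rules. The following Python sketch is an illustration added here (the function names are assumptions); it mirrors the worked examples digit by digit.

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_dec(h):
    """Sum digit x 16^position, as in the worked examples above."""
    value = 0
    for position, digit in enumerate(reversed(h.upper())):
        value += HEX_DIGITS.index(digit) * (16 ** position)
    return value

def dec_to_hex(n):
    """Repeated division by 16; remainders read bottom-up."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(HEX_DIGITS[n % 16])
        n //= 16
    return "".join(reversed(digits))

def hex_to_bin(h):
    """Each hexadecimal digit becomes its 4-bit binary equivalent."""
    return "".join(format(HEX_DIGITS.index(d), "04b") for d in h.upper())

print(hex_to_dec("5B"))     # 91
print(hex_to_dec("1A5D"))   # 6749
print(dec_to_hex(10767))    # '2A0F'
print(hex_to_bin("5CB8"))   # '0101110010111000'
```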

Codes for mnemonics and data are entered into a microprocessor kit in hexadecimal form for the convenience of the programmer. Internally they are converted to their binary equivalents, as shown in Tables 1.5 and 1.6, for processing by the microprocessor.

1.2.6 ASCII Code

ASCII stands for American Standard Code for Information Interchange and is pronounced "ask-ee". The ASCII code is widely used in small computers, peripherals, instruments and communication devices. It is a 7-bit code. Microcomputers having an 8-bit word length use 7 bits to represent the basic code. The 8th bit is used for parity, or it may be kept permanently at 1 or 0. Table 1.7 shows the ASCII codes.

Table 1.7. ASCII Codes (hexadecimal)
ASCII Code  Character      ASCII Code  Character
00          NUL            40          @
01          SOH            41          A
02          STX            42          B
03          ETX            43          C
04          EOT            44          D
05          ENQ            45          E
06          ACK            46          F
07          BEL            47          G
08          BS             48          H
09          HT             49          I
0A          LF             4A          J
0B          VT             4B          K
0C          FF             4C          L
0D          CR             4D          M
0E          SO             4E          N
0F          SI             4F          O
10          DLE            50          P
11          DC1 (X-on)     51          Q
12          DC2 (Tape)     52          R
13          DC3 (X-off)    53          S
14          DC4            54          T
15          NAK            55          U
16          SYN            56          V
17          ETB            57          W
18          CAN            58          X
19          EM             59          Y
1A          SUB            5A          Z
1B          ESC            5B          [
1C          FS             5C          \
1D          GS             5D          ]
1E          RS             5E          ^
1F          US             5F          _
20          SP             60          `
21          !              61          a
22          "              62          b
23          #              63          c
24          $              64          d
25          %              65          e
26          &              66          f
27          '              67          g
28          (              68          h
29          )              69          i
2A          *              6A          j
2B          +              6B          k
2C          ,              6C          l
2D          -              6D          m
2E          .              6E          n
2F          /              6F          o
30          0              70          p
31          1              71          q
32          2              72          r
33          3              73          s
34          4              74          t
35          5              75          u
36          6              76          v
37          7              77          w
38          8              78          x
39          9              79          y
3A          :              7A          z
3B          ;              7B          {
3C          <              7C          |
3D          =              7D          }
3E          >              7E          ~
3F          ?              7F          DEL (RUB OUT)

ASCII-8. A newer version of ASCII is ASCII-8. In ASCII, seven bits represent the basic code, and 7 bits can represent up to 128 characters. ASCII-8 uses 8 bits to represent the basic code of a character, and 8 bits can represent up to 256 characters. Table 1.8 shows ASCII-8 codes. The ASCII-8 code is used in larger machines, whereas ASCII is used in smaller machines.

Table 1.8. ASCII-8 Codes
Character   ASCII-8 Code      Character   ASCII-8 Code
0           50                J           AA
1           51                K           AB
2           52                L           AC
3           53                M           AD
4           54                N           AE
5           55                O           AF
6           56                P           B0
7           57                Q           B1
8           58                R           B2
9           59                S           B3
A           A1                T           B4
B           A2                U           B5
C           A3                V           B6
D           A4                W           B7
E           A5                X           B8
F           A6                Y           B9
G           A7                Z           BA
H           A8
I           A9

In ASCII-8 the first 4 bits are called zone bits and the remaining 4 bits are numeric bits. In ASCII the first 3 bits are zone bits and the remaining 4 bits are numeric bits. In ASCII and ASCII-8 the numeric bits are identical but the zone bits are different.
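Because ASCII is also the character code used by Python, the built-in ord and chr functions can be used to inspect Table 1.7. The parity-bit helper below is an illustration added here of how the 8th bit described above might be used for even parity; it is not part of the ASCII standard itself.

```python
# ord() gives the ASCII code of a character, chr() the reverse.
print(hex(ord("A")), hex(ord("a")), hex(ord("0")))   # 0x41 0x61 0x30
print(chr(0x2A), chr(0x7E))                          # * ~

def with_even_parity(ch):
    """Set the 8th bit so that the total number of 1 bits is even
    (one common use of the extra bit in an 8-bit word)."""
    code = ord(ch)
    if bin(code).count("1") % 2 == 1:
        code |= 0x80          # make the overall parity even
    return code

print(format(with_even_parity("C"), "08b"))  # 'C' = 1000011 -> 11000011
```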



1.2.7 Boolean Algebra

Boolean algebra is the algebra of binary variables. Binary variables are also known as Boolean variables. Boolean variables have only two values, 0 and 1. The only logical operations in Boolean algebra are the OR, AND and NOT operations. The mathematical symbols + and . stand for the OR and AND operations respectively. A prime (') or an overbar is used for the NOT (INVERT or COMPLEMENT) operation; in Table 1.9, A' denotes the complement of A. There are two important means to express the relationship between the input and output variables of a digital system: the truth table and the Boolean expression. The aim of a designer is to use the minimum number of basic circuits to perform a digital operation. First of all he writes the Boolean expression for the system. This expression is minimized using Boolean theorems. The Karnaugh map is also a minimization technique used in the design of digital systems. The minimum number of circuits reduces the cost and increases the speed and reliability of the system. Table 1.9 presents a list of important Boolean theorems.

The first four laws are for the OR operation. These laws can easily be explained. The first law is A + 0 = A. The binary variable A can have only two values, either 0 or 1. When 0 is put in the above equation for A, the equation becomes 0 + 0 = 0, which is true for the OR operation. Similarly, when 1 is put for A, we get 1 + 0 = 1, which is true for the OR operation. Similarly, the other OR laws can be explained by putting the value 0 or 1 for the binary variable A.

Table 1.9. Boolean Algebraic Theorems
OR Laws
1. A + 0 = A
2. A + 1 = 1
3. A + A = A
4. A + A' = 1
AND Laws
5. A . 0 = 0
6. A . 1 = A
7. A . A = A
8. A . A' = 0



Laws of Complementation
9. (A')' = A
10. 1' = 0
11. 0' = 1
12. If A = 0, then A' = 1
13. If A = 1, then A' = 0
Commutative Laws
14. A + B = B + A
15. A . B = B . A
Associative Laws
16. (A + B) + C = A + (B + C) = A + B + C
17. (A . B) . C = A . (B . C) = A . B . C
Distributive Laws
18. A . (B + C) = A . B + A . C
19. A + B . C = (A + B) . (A + C)
20. A + AB = A
21. A . (A + B) = A
22. A + A'B = A + B
23. A . (A' + B) = AB
24. AB + AB' = A
25. (A + B)(A + B') = A
26. AB + A'C = (A + C) . (A' + B)
27. (A + B)(A' + C) = AC + A'B
28. AB + A'C + BC = AB + A'C
29. (A + B)(A' + C)(B + C) = (A + B)(A' + C)
De Morgan's Theorems
30. (A + B + C + ...)' = A' . B' . C' ...
31. (A . B . C ...)' = A' + B' + C' ...

In Table 1.9, laws 5 to 8 are for the AND operation. The fifth law is A . 0 = 0. This law is true for the AND operation when A is put equal to either 0 or 1. Similarly, laws 6 to 8 are true when the value of A is put as either 0 or 1. The laws of complementation, 9 to 13, are obvious and very simple. Law 14 is for an OR gate: A + B = B + A. If the input signals of an OR gate are transposed, the output remains the same. Each of the binary variables A and B can have either the value 0 or 1. Put all possible values for A and B in law 14 and see that the relation is true for the OR law. Law 15 is for the AND operation. If the input signals are



interchanged, the output remains the same. This can also be explained by putting binary values for A and B in the equation. Law 16 is the associative law for the OR operation: (A + B) + C = A + (B + C) = A + B + C. To explain this, take two OR gates. A and B are used as input signals for the 1st OR gate. Now the output of the 1st gate and C are fed to the 2nd gate. You will see that the output of the 2nd gate is equal to A + B + C. Again, apply B and C to the 1st OR gate. The output of the 1st gate and A are applied to the 2nd OR gate. The output of the 2nd gate will again be equal to A + B + C. This can also be explained by putting binary values of A, B and C in the equation. For example, suppose A = 1, B = 0 and C = 1. Now equation 16 becomes (1 + 0) + 1 = 1 + (0 + 1). Performing the OR operation on the binary numbers in brackets on both sides, we get 1 + 1 = 1 + 1, or 1 = 1, which proves the law. Also, A + B + C = 1 + 0 + 1 = 1. Law 17 is for the AND operation and can be explained in the same way as law 16. Law 30 is De Morgan's theorem. For a two-input system the law is (A + B)' = A' . B'. By forming the truth tables for (A + B)' and A' . B' it can be shown that this law is true; this has been proved while discussing the NOR gate in Section 2.15.5. Law 31 is also De Morgan's theorem. For a two-input system the law is (A . B)' = A' + B'. This can be proved by forming truth tables for (A . B)' and A' + B' for all possible binary values of A and B; this has already been shown in Section 2.15.4 while discussing the NAND gate. The other laws of Table 1.9 can be proved using truth tables or with the help of the other Boolean laws.
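Laws such as De Morgan's theorems can be verified mechanically by enumerating all input combinations, exactly as the truth-table argument above suggests. The Python sketch below is an illustration of that procedure added here (the helper names are assumptions).

```python
from itertools import product

def verify(law, variables=2):
    """Check a Boolean identity for every combination of 0/1 inputs."""
    return all(law(*values) for values in product((0, 1), repeat=variables))

# Law 30: (A + B)' = A' . B'      Law 31: (A . B)' = A' + B'
demorgan_30 = lambda a, b: (1 - (a | b)) == ((1 - a) & (1 - b))
demorgan_31 = lambda a, b: (1 - (a & b)) == ((1 - a) | (1 - b))
# Law 22: A + A'B = A + B
law_22 = lambda a, b: (a | ((1 - a) & b)) == (a | b)

print(verify(demorgan_30), verify(demorgan_31), verify(law_22))  # True True True
```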

1.2.8 Flip-Flop

A digital computer needs devices which can store information. A flip-flop is a binary storage device. It can store one binary bit, either 0 or 1. It has two stable states, HIGH and LOW, i.e. 1 and 0. It has the property of remaining in one state indefinitely until it is directed by an input signal to switch over to the other state. It is also called a bistable multivibrator. It is a basic memory element. The basic flip-flops are the S-R flip-flop, J-K flip-flop, D flip-flop and T flip-flop. Presetting and clearing of a clocked flip-flop may be required before applying inputs. For this purpose Preset (PR) and Clear (CLR) terminals are provided. PR and CLR should not be low at the same time. Preset and Clear are used to initialize a circuit before normal clocked operation of the circuit starts.


1.2.9 Register

A flip-flop is the basic memory unit. It is a 1-bit memory element. A register is a set of flip-flops used to store a binary word. To store an n-bit binary word a set of n flip-flops is used. They are connected in parallel. All bits of an n-bit word are transferred to the register in parallel. Unlike a latch, the contents of a register can be read at any time without altering its value. Generally registers are built using edge-triggered D flip-flops. The 74273 is an 8-bit register. It consists of 8 positive-edge-triggered flip-flops with common clock and clear lines. The registers of a microprocessor are within the microprocessor's IC package.

Shift Register

A shift register is a register which is able to shift its contents within itself without changing the order of the bits. It may be designed to shift data either left or right. The data is shifted one bit at a time when a clock pulse is applied. The shift register is used for temporary storage of data. It is also used for multiplication and division, where bit-shifting is required. Shift registers can be built using R-S, J-K or D flip-flops. In a shift register the output of one flip-flop is connected to the input of the following flip-flop. Shift registers are not suitable for storing large amounts of data as they are slow. Depending on the modes of loading and reading data, there are the following four types of shift registers (a simple simulation is sketched after this list):
(i) Serial-in, serial-out
(ii) Serial-in, parallel-out
(iii) Parallel-in, serial-out
(iv) Parallel-in, parallel-out
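The behaviour of a serial-in, serial-out shift register can be modelled with a few lines of code. The Python sketch below is a behavioural illustration only, added here; it does not model clock edges or any specific IC, and each call to clock() plays the role of one clock pulse.

```python
class ShiftRegister:
    """Serial-in, serial-out shift register built from n stages."""

    def __init__(self, n_bits=4):
        self.bits = [0] * n_bits          # one flip-flop per bit, newest bit at the left

    def clock(self, serial_in):
        """On each clock pulse, shift by one position:
        the new bit enters at the left, the oldest bit falls out at the right."""
        serial_out = self.bits[-1]
        self.bits = [serial_in] + self.bits[:-1]
        return serial_out

reg = ShiftRegister(4)
for bit in [1, 0, 1, 1]:                  # shift the word 1011 in serially
    reg.clock(bit)
print(reg.bits)                           # [1, 1, 0, 1]; the last bit shifted in is at the left
```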

1.2.10 Counters

The function of a digital counter is to count the number of electrical pulses. To count certain events, electrical pulses proportional to the number of events are generated and counted. Digital counters consist of flip-flops. There are two types of digital counters, namely asynchronous and synchronous. In an asynchronous counter all flip-flops are not clocked simultaneously. On the other hand, in a synchronous counter all flip-flops are clocked simultaneously. Synchronous counters are faster than asynchronous counters due to the simultaneous clocking of flip-flops. If a counter consists of n flip-flops it can count up to 2^n states. A ripple counter is an asynchronous counter. The pulses to be counted are applied to the CLK (clock) terminal of the 1st flip-flop of the counter. The output Q of the 1st flip-flop is connected to the CLK terminal of the 2nd flip-flop. Similarly, the output Q of the 2nd flip-flop is connected to the CLK terminal of the 3rd flip-flop. In this way the Q output of one flip-flop is connected to the CLK terminal of the next flip-flop. If there are 4 flip-flops in a counter, the counter will count from 0000 to 1111 and it is called a 4-bit binary counter. There is a CLR terminal to clear the counter.



Up Counter. An up counter counts upwards starting from zero. A four-bit binary up counter counts from 0000 to 1111.

Down Counter. A down counter counts downwards starting from the maximum value. For example, a down counter containing 4 flip-flops starts counting from 1111 and goes down to 0000. In a down counter the Q' (complement) output of the flip-flop of one stage is connected to the CLK terminal of the flip-flop of the next stage. There is an arrangement for presetting that stores 1111 as the initial value. As counting goes on, each electrical pulse reduces this value by 1.

Up-Down Counter. In an up-down counter a control circuit is added to make the counter either an up counter or a down counter. A U/D' control terminal is provided in an up-down counter. If U/D' is high the counter becomes an up counter. If U/D' is low the counter becomes a down counter.

Controlled Counter. A controlled counter counts electrical pulses only when it is asked to do so. There is a COUNT terminal to control counting. When COUNT is high the counter counts the electrical pulses applied to it. When COUNT is low the counter does not count, even though the pulses may remain applied to it. All flip-flops remain latched in this situation.

Ring Counter. A ring counter uses D flip-flops. The output Q of the last stage is fed back to the D input of the 1st stage. The CLK terminals of all flip-flops are connected to the clock pulses. All flip-flops are clocked simultaneously. Fig. 1.2 shows a 4-bit ring counter. For initial setting, when CLR first goes low and then becomes high, an initial word 0001 is set in a 4-bit ring counter. Now each clock pulse shifts the content of the counter left by one bit. The MSB is shifted into the LSB. A ring counter is used to control a sequence of operations. A number of devices may be connected to the Q0, Q1, Q2, ..., Qn outputs of a ring counter. The outputs Q0, Q1, Q2, ..., Qn go high in sequence. When CLR goes low, Q0 becomes high and it activates the device connected to it. Then CLR returns high. Now clock pulses applied to the CLK terminal activate the devices connected to the Q outputs in sequence. The 1st clock pulse makes Q1 high, the 2nd clock pulse makes Q2 high, and so on. At a time only one output is high and the others are low. In a digital computer a number of digital circuits are to be activated in the right sequence at precisely the right time to fetch and execute instructions. A ring counter is quite suitable for this task.


Fig. 1.2. Ring Counter.

Modulus of a Digital Counter. The number of output states of a counter is called its modulus. A 4-bit binary counter has distinct states numbered from 0000 to 1111, so its modulus is 16.

Binary Counter. In a binary counter the output Q of the flip-flop of one stage is connected to the clock terminal of the next stage. All flip-flops are connected to work as T flip-flops. A T flip-flop changes the state of its output on the receipt of a clock pulse. Suppose that a flip-flop's output is 0. Now a clock is applied. On the receipt of a positive clock pulse the output changes from 0 to 1. When the clock pulse goes low there is no change. Again, when the clock pulse becomes high the output changes from 1 to 0. Thus on the receipt of 2 clock pulses the flip-flop gives one pulse at its output. Suppose there are 3 flip-flops in a counter. If 8 clock pulses are applied to its input terminal (which is the clock input terminal of the first flip-flop), the first flip-flop gives 4 pulses at its output terminal Q0. As Q0 is connected to the clock input terminal of the second flip-flop, the 2nd flip-flop gives 2 pulses at its output terminal Q1. As Q1 is connected to the clock input terminal of the 3rd flip-flop, the 3rd flip-flop gives one pulse at its output terminal Q2. Thus the counter can count from 000 to 111, so it is a 3-bit binary counter. Similarly, a counter containing 4 flip-flops counts from 0000 to 1111 and is a 4-bit binary counter. Fig. 1.3 (a) shows a 4-bit binary counter. Fig. 1.3 (b) shows its timing diagram. Binary counters can be built using J-K flip-flops. They are also available as ICs; the 7493, 74161, 74163, 74191, 74193, etc. are 4-bit binary counters.
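The toggling behaviour of the T flip-flops in a binary (ripple) counter can be simulated directly. The Python sketch below, added here as an illustration, is a behavioural model rather than a gate-level one: each stage toggles when the stage before it makes a 1-to-0 transition, which is how the divide-by-two action described above arises.

```python
class RippleCounter:
    """n-bit asynchronous (ripple) up counter built from T flip-flops."""

    def __init__(self, n_bits=4):
        self.q = [0] * n_bits             # q[0] is the least significant stage

    def clock(self):
        """Apply one input pulse; a stage toggles when the stage
        before it goes from 1 to 0 (ripple action)."""
        stage = 0
        while stage < len(self.q):
            self.q[stage] ^= 1            # toggle this stage
            if self.q[stage] == 1:        # no 1 -> 0 transition, ripple stops
                break
            stage += 1

    def value(self):
        return "".join(str(bit) for bit in reversed(self.q))

counter = RippleCounter(4)
for _ in range(10):
    counter.clock()
print(counter.value())                    # '1010' after 10 pulses
```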


Fig. 1.3. (a) 4-Bit Binary Counter (b) Timing Diagram

Decade Counter. A decade counter counts from 0000 to 1001. Its outputs are in BCD. It is also called a Mod-10 counter, divide-by-10 counter or BCD counter. The counter circuit is designed to generate a clear signal on the receipt of the 10th clock pulse, and the setting becomes 0000. Thus for every 10 clock pulses it gives one pulse at its output terminal. Examples of decade counters are the 7490, 74160, 74162, 74168, 74190, 74192, 74176, etc.

Presettable Counter. In a presettable counter the counting starts from a number greater than zero. This number can be set before the counting starts. Suppose a presettable counter has 4 flip-flops. It can count from 0000 to 1111. Now the counter is preset at 0100. When counting starts, the count becomes 0101 on the receipt of the 1st clock pulse. In this way it will count up to 1111 on the receipt of subsequent pulses. In a presettable counter the modulus becomes variable; it has a programmable modulus. Counters can be built using flip-flops in the laboratory, but the use of a counter IC is more convenient. Table 1.10 shows counter ICs.

Table 1.10. Counter ICs



IC No.                   Description
7490, 74290              Decade counters; divide by 2 and divide by 5; asynchronous.
7492                     Divide-by-12 counter; divide by 2 and divide by 6; asynchronous.
7493, 74293              4-bit binary counters; divide by 2 and divide by 8; asynchronous.
74176, 74196             Presettable decade/bi-quinary counters; asynchronous.
74177, 74197             4-bit binary counters; presettable; asynchronous.
74390                    Dual decade counters; bi-quinary or BCD; asynchronous.
74393                    Dual 4-bit binary counters; asynchronous.
74490                    Dual decade counters; asynchronous.
74160, 74162             Synchronous decade UP counters.
74161, 74163             Synchronous 4-bit binary UP counters.
74168, 74190, 74192      Synchronous decade UP/DOWN counters.
74169, 74191, 74193      Synchronous 4-bit binary UP/DOWN counters.

1.2.11 Basic Logic Gates

The building blocks of the digital world are transistor arrangements designed to output a known result on the application of a signal. The transistors in this capacity act as simple switches, but it is the particular configuration that yields the desired result. Since many of the devices used in digital electronics are prepackaged, we can represent them by a symbol and a truth table. Some of these configurations are shown here.

AND Gate

The AND gate requires both inputs to be logic 1 for an output of logic 1.

Fig 1.4 AND GATE

Truth Table of the AND gate: Q = A * B or Q = A . B

INPUT        OUTPUT
A      B     Q
0      0     0
0      1     0
1      0     0
1      1     1



OR Gate

If any input is logic 1, then the output of an OR gate will be logic 1.

Fig 1.5 OR GATE

Truth Table for the OR gate: Q = A + B

INPUT        OUTPUT
A      B     Q
0      0     0
0      1     1
1      0     1
1      1     1

INVERTER / NOT Gate

Any input is inverted. Logic 1 in yields logic 0 out and vice versa.

Fig 1.6 NOT GATE

Truth Table for the NOT gate: Q = A'

INPUT    OUTPUT
A        Q
0        1
1        0

NAND Gate

It is a combination of an AND gate and a NOT gate, as shown in Fig 1.7 below.

Fig 1.7



The actual diagrammatic representation is as shown below.

Fig 1.8 NAND GATE

Truth Table for the NAND gate: Q = (A . B)'

INPUT        OUTPUT
A      B     Q
0      0     1
0      1     1
1      0     1
1      1     0

NOR Gate

It is a combination of an OR gate and a NOT gate, as shown in Fig 1.9 below.

Fig 1.9

The actual diagrammatic representation is as shown below. Q = (A + B)'

Fig 1.10

Truth Table for the NOR gate: Q = (A + B)'

INPUT        OUTPUT
A      B     Q
0      0     1
0      1     0
1      0     0
1      1     0
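The five gates above can be expressed as one-line functions and their truth tables printed by enumeration. The Python sketch below is an illustration added here, using Python's integer bit operators on 0/1 values.

```python
from itertools import product

AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))

def truth_table(name, gate):
    """Print Q for every input combination A, B."""
    print(name)
    for a, b in product((0, 1), repeat=2):
        print(f"  A={a} B={b}  Q={gate(a, b)}")

for name, gate in [("AND", AND), ("OR", OR), ("NAND", NAND), ("NOR", NOR)]:
    truth_table(name, gate)
print("NOT: A=0 Q=", NOT(0), "  A=1 Q=", NOT(1))
```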



1.2.12 Logic Circuits

Fig. 1.11. Standard symbols for logic gates.

The three logical operations AND, OR and NOT are said to be logically complete, as any Boolean function may be realized using these three operations. The standard symbols given in Fig. 1.11 are used to represent the Boolean operations AND, OR and NOT respectively. Using these gates we can realize the expressions for sn and cn+1 given as Eq. (LC1) and Eq. (LC2), as shown in Fig. 1.12. This circuit is called a full adder circuit. If the three bits to be added are fed to the inputs an, bn and cn of the full adder, the sum sn and carry cn+1 appear as outputs.

sn   = an'.bn'.cn + an'.bn.cn' + an.bn'.cn' + an.bn.cn        Eq. (LC1)
cn+1 = an'.bn.cn  + an.bn'.cn  + an.bn.cn'  + an.bn.cn        Eq. (LC2)

Table 1.11. Truth table for a full adder
an    bn    cn    sn    cn+1
0     0     0     0     0
0     0     1     1     0
0     1     0     1     0
0     1     1     0     1
1     0     0     1     0
1     0     1     0     1
1     1     0     0     1
1     1     1     1     1
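Equations (LC1) and (LC2) can be checked against Table 1.11 by evaluating them for all eight input combinations. The Python sketch below, added here as an illustration, does this directly; the cross-check against the more familiar XOR/majority form of the same functions is an observation added here, not taken from the text.

```python
from itertools import product

def full_adder(a, b, c):
    """Sum-of-products form of Eq. (LC1) and Eq. (LC2), with x' written as (1 - x)."""
    na, nb, nc = 1 - a, 1 - b, 1 - c
    s     = (na & nb & c) | (na & b & nc) | (a & nb & nc) | (a & b & c)
    carry = (na & b & c)  | (a & nb & c)  | (a & b & nc)  | (a & b & c)
    return s, carry

for a, b, c in product((0, 1), repeat=3):
    s, carry = full_adder(a, b, c)
    # Cross-check with the simplified form: s = a XOR b XOR c, carry = majority(a, b, c).
    assert s == a ^ b ^ c and carry == (a & b) | (b & c) | (a & c)
    print(a, b, c, "->", s, carry)        # reproduces Table 1.11
```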


Fig. 1.12. Gate realization of a full adder.

The truth table of Table 1.11 may also be realized using a memory. Suppose we permanently store in a memory which has 8 words the values of sn and cn+1 of Table 1.11, as shown in Fig. 1.13. The information from the memory may be retrieved by placing the bits corresponding to an, bn, cn in the MAR of the memory.

Fig. 1.13. A ROM (read only memory) realization of a full adder.

The contents of the specified location would appear in the MDR and are the values of sn and cn+1. Such a memory, where a table may be stored permanently, is known as a read only memory (ROM). If we want to add three bits, we place the three bits in the MAR of the ROM. The sum bit and the carry bit appear in the MDR.

A Seven-segment Display

A seven-segment display is used extensively in calculators and digital watches. The numbers 0 to 9 are displayed by selectively lighting a group of light emitting diodes (LEDs) (Fig. 1.14).


Fig. 1.14. A seven-segment display.

We will now design a logic circuit which will accept the 4-bit binary equivalent of a decimal digit and give outputs to selectively light the LEDs. The output of the circuit will be 7 bits, one bit corresponding to each segment to be lighted in the seven-segment display. If the output bit is a 1 it will turn on the corresponding light in the display, and if it is a 0 it will turn it off. Figure 1.15 is a block diagram of the logic circuit to be designed for this purpose.

Fig. 1.15. Logic circuit to be designed for a seven-segment display decoder.

The truth table corresponding to this logic circuit is developed as follows:

Step 1: The 4-bit binary equivalents of the digits 0 through 9 are enumerated and one row of the truth table is formed for each of these.

Step 2: For each row of the truth table the values of the 7 output bits necessary to light the segments of the display corresponding to the input digit are entered. For example, to display the digit 0 the segments A, B, C, D, E and F of the seven-segment display should light up. Thus the bits for A, B, C, D, E and F should be 1 and the bit



corresponding to G should be 0. Similarly, to display a 1 the bits B and C of the output should be 1 and A, D, E, F, G should be 0.

Step 3: The combinations of input bits 1010, 1011, 1100, 1101, 1110 and 1111 should not occur, as the input is expected to be a single digit. If these inputs occur by mistake, an error indication should be displayed. We will use the letter E as an error indication and display it by lighting up segments A, F, G, E and D.

The truth table developed is given as Table 1.12.

Table 1.12. Truth table to design the seven-segment display
                     INPUTS            OUTPUTS
                 X   Y   Z   W      A   B   C   D   E   F   G
LEGAL     0      0   0   0   0      1   1   1   1   1   1   0
DIGITS    1      0   0   0   1      0   1   1   0   0   0   0
          2      0   0   1   0      1   1   0   1   1   0   1
          3      0   0   1   1      1   1   1   1   0   0   1
          4      0   1   0   0      0   1   1   0   0   1   1
          5      0   1   0   1      1   0   1   1   0   1   1
          6      0   1   1   0      1   0   1   1   1   1   1
          7      0   1   1   1      1   1   1   0   0   0   0
          8      1   0   0   0      1   1   1   1   1   1   1
          9      1   0   0   1      1   1   1   1   0   1   1
ERROR            1   0   1   0      1   0   0   1   1   1   1
                 1   0   1   1      1   0   0   1   1   1   1
                 1   1   0   0      1   0   0   1   1   1   1
                 1   1   0   1      1   0   0   1   1   1   1
                 1   1   1   0      1   0   0   1   1   1   1
                 1   1   1   1      1   0   0   1   1   1   1

We can realize a logic circuit using AND, OR and NOT gates by writing Boolean expressions for A, B, C, D, E, F, G in terms of X, Y, Z, W. For example, the expression for B (the sum of the minterms of the rows in which B = 1) is

B = X'.Y'.Z'.W' + X'.Y'.Z'.W + X'.Y'.Z.W' + X'.Y'.Z.W + X'.Y.Z'.W' + X'.Y.Z.W + X.Y'.Z'.W' + X.Y'.Z'.W

Similar expressions may be obtained for A, C, D, E, F and G.



Another method of realizing the seven-segment display decoder is to store the outputs A, B, C, D, E, F, G in a 16-word ROM. The 16 words of the ROM are the 16 rows of the truth table (Table 1.12). The address bits are X, Y, Z, W and the contents are the corresponding values of A, B, C, D, E, F, G in the truth table. For example, address 0111, corresponding to X = 0, Y = 1, Z = 1, W = 1, will store the bits A = 1, B = 1, C = 1, D = 0, E = 0, F = 0 and G = 0. In order to display a digit its binary value X, Y, Z, W is fed to the MAR of the ROM. The output of the ROM, which appears in a 7-bit MDR, lights up the display LEDs.
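The ROM realization described above amounts to a 16-entry lookup table indexed by the input bits X, Y, Z, W. The Python sketch below, added here as an illustration, models it with a dictionary; the segment ordering A to G follows Table 1.12.

```python
# Each entry maps the 4-bit input XYZW to the 7 output bits A-G of Table 1.12.
ERROR_WORD = "1001111"                    # segments A, D, E, F, G display 'E'
SEVEN_SEGMENT_ROM = {
    0b0000: "1111110", 0b0001: "0110000", 0b0010: "1101101", 0b0011: "1111001",
    0b0100: "0110011", 0b0101: "1011011", 0b0110: "1011111", 0b0111: "1110000",
    0b1000: "1111111", 0b1001: "1111011",
}

def decode(xyzw):
    """Return the segment word A-G for a 4-bit address (MAR -> MDR lookup)."""
    return SEVEN_SEGMENT_ROM.get(xyzw, ERROR_WORD)

print(decode(0b0111))   # '1110000' -> digit 7 (segments A, B, C lit)
print(decode(0b1010))   # '1001111' -> error indication 'E'
```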

1.2.13 TTL Circuit

TTL stands for transistor-transistor logic. It is a saturated logic because the transistors operate between cut-off and saturation. It has been the most widely used circuit family since the early 1970s because of its speed, good fan-in and fan-out, and easy interfacing with other digital circuitry. The unique feature of this circuit is that it uses a multiple-emitter transistor at the input, which replaces the input diodes of DTL (diode transistor logic). The number of emitters is equal to the desired fan-in of the circuit. Since the multi-emitter transistor is smaller in area than the diodes it replaces, the yield from a wafer is increased. Moreover, the smaller area results in a lower capacitance to the substrate, thereby reducing circuit rise and fall times and hence increasing its speed. The family contains a very wide selection of circuit modules, ranging from simple gates and flip-flops in the SSI series, through various registers in the MSI series, to microprocessor bit-slice chips in the LSI series. The basic circuit of the TTL family is the NAND gate. However, at present, NOR, OR and AND gate configurations have also been added to the series.

TTL Sub-families

The TTL series has several sub-families having different speed and power dissipation characteristics, as detailed below.
1. 74L00 series: the letter L stands for low power consumption. It has an average power dissipation of 1 mW per gate but an average propagation delay of 33 ns.
2. 74H00 series: the letter H stands for high speed. It has a propagation delay of 6 ns but an average power dissipation of 23 mW per gate.
3. 74S00 series: the letter S represents Schottky. It has the highest speed because its average propagation delay is just 3 ns per gate. However, its average power dissipation is 23 mW per gate.
4. 74LS00 series: it is the most recent variant of the TTL family, called low-power Schottky TTL. It has an average propagation delay of 9.5 ns and an average power dissipation of 2 mW.



1.2.14 Computer Languages

Computers recognize and operate on binary numbers. However, each computer has its own binary words, meanings and language. The words are formed by combining a number of bits for a given machine. The word (or word length) is defined as the number of bits the computer recognizes and processes at a time. The word length ranges from 4 bits for small, microprocessor-based computers to 32 bits for large computers such as the IBM 370. Another term commonly used to express word length is the byte. A byte is defined as a group of 8 bits. For example, a 16-bit microprocessor has a word length equal to 2 bytes. The term nibble, which stands for a group of 4 bits, is also found in popular computer magazines and books. A byte has two nibbles.

Each machine has its own set of instructions based on the design of its CPU or of its microprocessor. To communicate with the computer, one must give instructions in binary language (machine language). Since it is difficult for most people to write programs in sets of 0s and 1s, computer manufacturers have devised English-like words to represent the binary instructions of a machine. Programmers can write programs, called assembly language programs, using these words. Because an assembly language is specific to a given machine, programs written in assembly language are not transferable from one machine to another. To circumvent this limitation, general-purpose languages such as BASIC and FORTRAN have been devised; a program written in these languages can be machine-independent. These languages are called high-level languages.

MACHINE LANGUAGE

The number of bits in a word for a given machine is fixed, and words are formed through various combinations of these bits. For example, a machine with a word length of 8 bits can have 256 (2^8) combinations of bits, and thus a language of 256 words. However, not all of these words need be used in the machine. The microprocessor design engineer selects combinations of bit patterns and gives a specific meaning to each combination by using electronic logic circuits; this is called an instruction. Instructions are made up of one word or several words. The set of instructions designed into the machine makes up its machine language: a binary language, composed of 0s and 1s, that is specific to each computer. The microprocessor determines the machine language and the operations of a microcomputer.

ASSEMBLY LANGUAGE

The first step in the evolution of programming languages was the development of what is known as an assembly language. In an assembly language, mnemonics are used to represent operation codes, and strings of characters to represent addresses. We give in Table 1.15 a machine language program and the corresponding assembly language program. It may be observed from this table that there is a one-to-one correspondence between the assembly language program and the equivalent machine language program. As an assembly language is designed mainly to replace each machine code with an



understandable mnemonic and each address with a simple alphanumeric string, it is matched to a particular computer's processor structure. It is thus machine dependent. It is necessary for an assembly language programmer to know all the details of a computer's logical structure in order to write a program. It thus suffers from many of the disadvantages of machine language.

Table 1.15. An assembly program and its machine language equivalent
        Assembly code                    Machine code
Operation   Operand address      Operation   Operand address
READ        K                    A           200
READ        L                    A           201
READ        M                    A           202
READ        N                    A           203
CLA         K                    1           200
ADD         L                    2           201
ADD         M                    2           202
ADD         N                    2           203
STO         W                    6           204
PRT         W                    B           204
HLT                              F

In order to execute an assembly language program on a computer, it should first be translated into its equivalent machine language program. This is necessary because the computer's circuitry is designed to execute only the operation codes of the machine. The mnemonic operation codes should be converted to absolute numeric operation codes. The symbolic addresses of operands used in assembly language should also be converted to absolute numeric addresses. The translator which does this is known as an assembler (see Fig. 1.13). The input to an assembler is the assembly language program and is known as the source program. Its output is the equivalent machine language program and is known as the object program. The assembler is a system program which is supplied by the computer manufacturer. It is written by system programmers with great care.


Fig. 1.13. Depicting an assembler.

The main advantage of using an assembly language for programming is the efficiency of the machine language program resulting from it. As all hardware features available in the processor of the computer, such as registers, stacks, etc., are available to the assembly language programmer, he has great flexibility in writing programs well matched to the computer. The main disadvantages of an assembly language are:
(i) It is machine-dependent. Thus programs written for one model of a computer cannot be executed on another model. In other words, it is not portable from one machine to another.
(ii) An assembly language programmer must be an expert who knows all about the logical structure of the computer.
(iii) Writing assembly language programs is difficult and time-consuming.
The trend is thus to avoid using an assembly language. It is used only when efficiency is of paramount importance or when there is a need to control input/output devices. Efficiency is important, for instance, if a command to control an aircraft is to be given in a specified time. In cost-sensitive applications, for instance when a microprocessor-based controller is used in a consumer item such as a washing machine, the program is stored in a read only memory and the size of the ROM determines the cost. The control program has to be concise, and assembly language ensures this.

Executing an Assembly Language Program

Before a machine language program can be executed it must be stored in the memory of the computer. The program itself may be stored on a floppy disk. This program is read into memory by another program called a loader, which is already stored in the memory. The loader, or initial program load (IPL), is automatically stored in the memory by the hardware of the computer when the computer is switched on. In cheaper computers this program (in binary form) is loaded manually by the operator using console switches.



The loader program should be kept small. Thus a procedure is used whereby the first few instructions of the loader load the rest of the loader program. This is called bootstrap loading. In order to execute an assembly language program, we must first store the loader. The loader then reads the assembler from a peripheral memory (such as a disk) and stores it in the main memory. The assembler would be written in machine language and can thus be stored directly in memory. The assembler now reads each instruction of the assembly language program, translates it into a machine language statement, and stores it in the main memory. When the entire assembly language program is translated and stored, control is passed to the first instruction of the assembled user program. This program now starts execution. The progression of storing various programs in main memory is illustrated in Fig. 1.14. It may be observed from Fig. 1.14 that only part of the main memory is available to store a user's program. The available free memory for user programs is usually specified by the manufacturer.

Fig. 1.14. Illustrating memory allocation for loader and assembler.

The assembler is a machine language program. Since machine language programs are difficult to write, it would be preferable to keep them small. One method of doing this would be to define an assembly language with minimal features and write an assembler for this language. A higher level assembly language may then be defined (for example, one which allows expressions for addresses). A program in this higher level assembly language may be translated by a program written in the rudimentary assembly language



for which a translator already exists. This is a very powerful method and is called the bootstrap method. Figure 1.15 illustrates this.

Fig. 1.15. Illustrating the bootstrap method of writing translators (Boxes 1 and 2 are identical).

HIGHER LEVEL PROGRAMMING LANGUAGES

During the evolution of computers, till about 1955, computers were slow and had a small memory. Thus programming efficiency was very important and assembly language was dominant. The use of computers was also limited to a small group of scientists. With improvements in technology, computers were designed with larger memory capacity, higher speed and improved reliability. The tremendous potential of computer applications in diverse areas was foreseen. It was evident that this potential could be realized only if a non-expert user could effectively use the computer to solve problems. It was thus clear that a user should be concerned primarily with the development of appropriate algorithms to solve problems of interest to him and not with the details of the internal logical structure of the computer. Consequently, a good notation to express algorithms became an essential requirement. It would be ideal if an algorithm written in a natural (spoken) language such as English were translated to machine language automatically by the computer and executed. This is not possible because natural languages are not precise or unambiguous. The interpretation of the meaning of a natural language sentence also depends on the context. For example, the sentence "Give me a ring" may mean either give me a ring to wear or a ring on the telephone, depending on the context. In fact the whole profession of lawyers would be redundant if sentences had unique interpretations! Thus if algorithms are to be executed by computers, it is necessary to develop a simple, concise, precise and unambiguous notation to express them. The notation should also match the type of algorithm. For example, algorithms to solve science and engineering problems would have complex arithmetic operations and would use mathematical



functions such as tan x, cos x, etc. Thus a notation to express such algorithms should include complex arithmetic operations and mathematical functions. On the other hand, algorithms for processing business data would have operations to be performed on massive amounts of organized data known as files. The arithmetic to be performed would be very simple. The notation in this case must facilitate describing files and processing files. Such notations used to express algorithms are known as high level procedure oriented programming languages. In general, a high level language to express algorithms must have the following features:

(i) A facility to describe the nature of the data to be processed. For example, specifications of integers, reals, complex numbers, characters, etc., are required depending on the nature of the application area and the associated algorithm. Besides individual data items, a collection of similar types of data making up a composite, known as a data structure, is very important in developing algorithms. Examples of data structures are arrays, matrices, sets, and strings of characters. Each high level language has the facility to describe some of these structures depending on the area of application of the language. Rigid specification of variable types is one of the most important features of high level procedure oriented programming languages.

(ii) Operators which are appropriate to the data items and data structures in the language. For example, if we have a facility to represent complex numbers, then complex addition, subtraction, multiplication and division operations would be useful.

(iii) A set of characters using which the symbols of the language are constructed. These symbols have a precise meaning in the context of the language. For example, the symbol ** is used to represent the exponentiation operation in FORTRAN. Thus A**B would mean raising A to the power B.

(iv) Control structures to sequence the operations to be performed. In most algorithms one would find alternate sets of statements to be performed on the basis of testing a condition. Repetition of groups of statements is also necessary in many algorithms. Thus a high level language should provide control structures appropriate to express algorithms. For example, a common control structure found in a high level language is:

if A > B then X := Y + Z else X := P + Q;

which means: compare the numbers stored in A and B. If the number stored in A is larger than that stored in B, then add the number stored in Y to that stored in Z and place the result in X; otherwise add the number stored in P to that stored in Q and place the result in X. A repetition structure, for example, is:

I := 0;
Repeat
    I := I + 1;
    SUM := SUM + N;
    N := N + 5;

A syntactically correct statement is not necessarily semantically meaningful. In natural language (English), for instance, the sentences

    Ram plays football
    Football plays Ram

are both syntactically correct. The second sentence, however, is semantically meaningless. Similarly, the statement C = B/D would be semantically incorrect if D happens to be zero. In high level languages for computers there should be no semantic ambiguity: each syntactically correct structure should have one and only one semantic interpretation. This is in contrast with natural languages.
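The C = B/D point can be made concrete with a few lines of Python; the variable names and values below are arbitrary. The statement is accepted by the language's syntax, yet it has no valid meaning at run time when d is zero.

    b, d = 12, 0
    try:
        c = b / d              # perfectly legal syntax
    except ZeroDivisionError:  # but semantically meaningless when d is zero
        print("C = B/D has no meaning when D is zero")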

The syntax and semantic rules of the language, besides being concise and precise, should aid in understanding the program. An understandable program is self-documenting and thus easily maintainable. One therefore chooses words in the language which have meanings similar to their meanings in everyday English usage. Besides this, a facility to intersperse the program with comments (which are not part of the program) should be provided to aid program understanding.

High level languages are designed independently of the structure of a specific computer. This facilitates executing a program written in such a language on different computers. Associated with each high level language is an elaborate computer program which translates it into the machine language of the computer on which it is to be executed. The translator program is normally written in the assembly language of that computer. Figure 1.16 explains various terms used in high level language translation, and Figure 1.17 illustrates how machine independence is achieved by using different translators to translate a high level language program to the machine languages of different computers.

Fig. 1.16. Illustrating terminology used in High level language translation.

Fig. 1.17. Illustrating machine independence of High level language.

In Table 1.16 we illustrate a small program written in a high level language called BASIC and the equivalent machine language program of HYPCOM. Observe that one high level language statement is translated into many machine language statements. This is one-to-many translation, and the terminology "high level language" arises from it. An assembly language is a low level language, as its translation to machine language is one-to-one. It is possible to translate a high level language into one at a lower level, but the reverse is not always possible.

There are two approaches to writing language translators. One method is to take one statement of the high level language at a time and translate it into machine instructions which are immediately executed. Such a translator is called an interpreter. Interpreters are easy to write and they do not require large memory space in the computer. The main disadvantage of interpreters is that the translated program requires more time to execute on a computer.
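To make the statement-at-a-time idea concrete, the fragment below sketches a toy interpreter in Python. The miniature language (simple assignments and a PRINT statement), the sample program and the use of a dictionary as the data memory are illustrative assumptions, not the design of any real BASIC interpreter.

    program = [
        "K = 10",
        "L = 20",
        "W = K + L",
        "PRINT W",
    ]

    memory = {}                      # plays the role of the computer's data memory
    for statement in program:        # one statement at a time
        if statement.startswith("PRINT"):
            name = statement.split()[1]
            print(memory[name])      # translate and execute immediately
        else:
            target, expression = statement.split("=", 1)
            # evaluate the right hand side using values already stored in memory
            memory[target.strip()] = eval(expression, {}, memory)

Notice that if a statement appeared twice in the program it would be translated afresh each time it is met, which is one reason interpreted programs take longer to run.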

The other approach to translation is to store the high level language program, scan it, and translate the whole of it into an equivalent machine language program. Such a translator is known as a compiler. A compiler is a complex program compared to an interpreter, and it takes more time to compile than to interpret. However, a compiled machine language program runs much faster than an interpreted program.

Table 1.16. A BASIC program and its HYPCOM machine language equivalent
BASIC PROGRAM
    10 READ K, L, M, N
    20 LET W = K + L + M + N
    30 PRINT W
    40 END

MACHINE LANGUAGE
    Each BASIC statement is translated into a group of HYPCOM machine
    instructions, written as hexadecimal operation codes together with
    operand addresses in the range 200 to 204.
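The sketch below gives a rough feel for this one-to-many expansion. It is only an illustration: the LOAD/ADD/STORE mnemonics, the assumed accumulator-style machine and the mapping of K, L, M, N and W to locations 200 to 204 are inventions for the example, not HYPCOM's actual operation codes.

    def compile_sum(target, operands, addresses):
        """Translate e.g. W = K + L + M + N into accumulator-style instructions."""
        code = [("LOAD", addresses[operands[0]])]      # bring the first operand in
        for name in operands[1:]:
            code.append(("ADD", addresses[name]))      # add each remaining operand
        code.append(("STORE", addresses[target]))      # store the result
        return code

    addresses = {"K": 200, "L": 201, "M": 202, "N": 203, "W": 204}
    for instruction in compile_sum("W", ["K", "L", "M", "N"], addresses):
        print(instruction)

Running it prints five machine-level instructions for the single BASIC statement 20 LET W = K + L + M + N.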

The difference between an interpreter and a compiler may be understood with the help of the following analogy. Suppose we want to translate a speech from Russian to English. There are two approaches one can use. The translator can listen to a sentence in Russian and immediately translate it to English. Alternatively, the translator can listen to the

whole passage in Russian and then give the equivalent English passage. If the speaker repeats the same or a similar sentence, then, in the first case, the equivalent English sentence will also be repeated. In the second case the translation will be more concise, as the English equivalent of the whole passage will be given and there will be no repetitions. A person who can translate a whole passage has to be a better translator and must remember more information than one who translates sentence by sentence. An interpreter is similar to sentence-by-sentence translation, whereas a compiler is similar to translation of the whole passage.

High level languages which have the power to express a general class of algorithms are known as procedure oriented languages. These languages express in detail the procedure used to solve a problem. Another class of high level languages is called problem oriented languages. These are designed to solve a narrower class of problems. A user of such a language need not express in detail the procedure used to solve the problem; ready-made procedures are preprogrammed, and the user merely presents the input data to the program in a flexible language. For example, a problem oriented language called STRESS (STRuctural Engineering System Solver) accepts a description of the structure to be analysed in a simple format of the type:

    NODE 5 IS AT X = 25.5, Y = 20.7, Z = 10.5
    BEAM BETWEEN NODES 5 AND 15

It then generates the equations to be solved, invokes a built-in algebraic equation solution program and solves the problem. The user need not specify how to solve the problem; he merely has to state what problem is to be solved, using the appropriate language. Recent popular problem oriented languages are MATLAB and MATHEMATICA. MATLAB is popular among scientists and engineers for solving a wide class of problems modeled by differential equations and matrices. MATHEMATICA is used to simplify complex algebraic expressions, find expressions resulting from indefinite integration, and solve similar symbol manipulation problems.

COMPILING HIGH LEVEL LANGUAGE PROGRAM

In this section we discuss briefly the steps in compiling a high level language program into an executable machine language program. Broadly, the compilation process consists of two steps. The first step is the analysis of the source program and the second is the synthesis of the object program in the machine language of the specified machine (Fig. 1.18). The analysis step uses a precise description of the source programming language. A source language is described using lexical rules, syntax rules and semantic rules.

Fig. 1.18 Block diagram of a language processor

Lexical rules specify the valid syntactic elements or words of the language. Syntax rules specify how the valid syntactic elements are combined to form statements of the language. Semantic rules assign meaning to valid statements of the language.

Consider, for example, the following statement in a high level language:

    principal := principal * (1 + rate/100)

The syntactic elements of the statement are: principal, :=, *, (, 1, +, rate, /, 100 and ). The syntactic elements principal and rate are called identifiers. The symbol := is the assignment operator, and *, + and / are arithmetic operators. The numbers 1 and 100 are integer constants, and the symbols (, ) and ; are called delimiters. Each syntactic element is defined using the lexical rules of the language. These rules are given using a notation called Backus Naur Form, abbreviated BNF in honor of Backus and Naur, who invented this notation to describe computer languages. Each syntactic unit is given a name and shown as <name>. For example, the syntactic unit digit is defined as:

    <digit> → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

The arrow → represents "defined as" and the vertical bar | represents "or". The above definition is thus read as: <digit> is defined as 0 or 1 or 2 or ... or 9. We define letter as:

    <letter> → a | b | c | ... | x | y | z

In other words, a <letter> is any one of the lower case English letters. These characters are combined to form a syntactic unit called <identifier>, which is defined as:

    <identifier> → <letter> | <identifier><letter> | <identifier><digit>

Observe that the above definition is given in terms of itself; this is called a recursive definition. Using this rule the following are valid identifiers:

    p, pr, pr2, principal

as p is an <identifier>, and pr is an <identifier> (namely p) followed by a <letter> r, which is therefore also an <identifier>. Following this argument, pr followed by a <digit>, namely 2, is also an <identifier>. The reader can verify that principal is a valid identifier as it conforms to these rules. On the other hand, the following are not valid identifiers, as they violate the syntax rules:

    2p, 2 + p, p 2, ? x, x; y
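A quick way to check one's understanding of the <identifier> rule is to code it up. The short Python sketch below (the function name and the test words are chosen only for illustration) accepts exactly the strings generated by the recursive BNF definition above: a lower case letter followed by any mixture of letters and digits.

    LETTERS = "abcdefghijklmnopqrstuvwxyz"
    DIGITS = "0123456789"

    def is_identifier(s):
        # first character must be a <letter>; the rest may be letters or digits
        if s == "" or s[0] not in LETTERS:
            return False
        return all(ch in LETTERS or ch in DIGITS for ch in s[1:])

    # p, pr, pr2 and principal are accepted; 2p and p 2 are rejected
    for word in ["p", "pr", "pr2", "principal", "2p", "p 2"]:
        print(word, is_identifier(word))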

Some other lexical rules are:

    <a.o.> → + | - | * | /    (where <a.o.> is an abbreviation for <arithmetic operator>)
    <delimiter> → ( | ) | ;
    <assignment operator> → :=

Having defined the words of the language, we next define how sentences of the language are formed, using syntax rules. Some of the rules are given below, where <a.e.> is an abbreviation for <arithmetic expression>:

    <a.e.> → <identifier> <a.o.> <identifier>
    <a.e.> → ( <a.e.> )
    <a.e.> → <a.e.> <a.o.> <a.e.>

An arithmetic statement is defined as:

    <arithmetic statement> → <identifier> := <a.e.>

Using the above rule and the rules given earlier defining <a.e.>, it can be verified that

    principal := principal * (1 + rate/100)

is a syntactically correct arithmetic statement. Next we have to assign meanings to syntactically correct units. In the above example the semantic interpretation of the computation is: compute the value of the expression on the right hand side of := and replace the value stored in the variable named principal by this value. The rules on how to compute the arithmetic expression should also be specified as semantic rules.

The steps used in the process of translating a high level language source program to executable code are given in Fig. 1.19. The first block is a lexical analyzer (or scanner). It reads successive lines of a program and breaks them into individual lexical items, namely identifier, operator, delimiter, etc., and attaches a type tag to each of these. Besides this, it constructs a symbol table entry for each identifier and finds the internal representation of each constant. The symbol table is used later to allocate memory to each variable.
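As an illustration of what the scanner does, here is a minimal Python sketch that breaks the statement above into (type tag, text) pairs and records each identifier in a symbol table. The token categories and the way the table is filled are simplifying assumptions, not the design of any particular compiler.

    import re

    TOKEN_SPEC = [
        ("identifier", r"[a-z][a-z0-9]*"),
        ("constant",   r"[0-9]+"),
        ("assign",     r":="),
        ("operator",   r"[+\-*/]"),
        ("delimiter",  r"[();]"),
        ("skip",       r"\s+"),
    ]
    SCANNER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

    def scan(line):
        tokens, symbol_table = [], {}
        for match in SCANNER.finditer(line):
            kind, text = match.lastgroup, match.group()
            if kind == "skip":
                continue
            if kind == "identifier" and text not in symbol_table:
                symbol_table[text] = len(symbol_table)   # later used to allocate memory
            tokens.append((kind, text))
        return tokens, symbol_table

    print(scan("principal := principal * (1 + rate/100)"))

Running it lists the tagged lexical items of the statement together with a symbol table containing principal and rate.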

Fig. 1.19 Steps in translation of high level language to machine language

The second stage of translation is called syntax analysis or parsing. In this phase, using the results of lexical analysis, the translator identifies expressions, statements, declarations, etc. Syntax analysis is aided by techniques based on the formal grammar of the programming language. In the semantic analysis phase the syntactic units recognized by the syntax analyzer are processed, and an intermediate representation of the final machine language code is produced. This phase bridges the analysis and synthesis phases of translation (see Fig. 1.18). The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program of the specified computer.

If a subprogram library is used, or if some subroutines are separately translated and compiled, a final linking and loading step is needed to produce the complete machine language program ready for execution. When subroutines are separately compiled, the addresses of the resulting machine language instructions will not be their final addresses when all the routines are placed together in main memory. The linker's job is to find the correct locations of the instructions in the final executable program, and the loader then places them in memory at those addresses.
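The following Python sketch gives a toy picture of the relocation carried out during linking and loading. The (mnemonic, address) pairs, the load address of 100 and the assumption that every operand is a routine-relative address are artificial simplifications of what real object-code formats and linkers do.

    def link_and_load(routines, base=100):
        # Each routine was compiled as if it began at address 0, so every
        # address it contains must be shifted by its final load address.
        memory = {}
        load_address = base
        for code in routines:
            for offset, (opcode, address) in enumerate(code):
                memory[load_address + offset] = (opcode, address + load_address)
            load_address += len(code)
        return memory

    # two separately compiled routines, with routine-relative operand addresses
    main = [("LOAD", 0), ("ADD", 1), ("STORE", 2)]
    sub = [("LOAD", 0), ("STORE", 1)]
    for location, instruction in sorted(link_and_load([main, sub]).items()):
        print(location, instruction)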

1.3 Revision Points

Number System
A number system is a set of numbers, or number-like objects, together with one or more operations, such as addition or multiplication.

ASCII
ASCII stands for American Standard Code for Information Interchange. The ASCII code is widely used in small computers, peripherals, instruments and communication devices. It is a 7-bit code.

Flip Flop
A flip flop is a binary storage device. It can store a binary bit, either 0 or 1. It has two stable states, HIGH and LOW, i.e., 1 or 0.
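To see the 7-bit ASCII code in action, the short Python fragment below (the sample characters are arbitrary) prints a few characters with their codes in decimal and as seven binary digits.

    for ch in "A", "a", "0":
        code = ord(ch)                      # ord() gives the code of the character
        print(ch, code, format(code, "07b"))  # each ASCII code fits in 7 bits
    print(chr(65))                          # chr() goes the other way: 65 -> 'A'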

1.4 Intext Questions
1. Why does a computer use binary numbers for its internal processing?
2. What is the hexadecimal number system? What are its advantages over the BCD system?
3. What is the BCD system? Where is it used?
4. What is the 1's complement of a binary number?

5. What is the 2's complement of a binary number?
6. What do you understand by ASCII? Discuss its areas of application.
7. What are logic gates? Discuss AND and OR gates with their truth tables.
8. What are registers?
9. What is a counter?
10. What are the functions of buffers?
11. What do you understand by the term machine language?
12. What is an assembly language for a computer?
13. What is an assembler?
14. What is a high level language?

Summary
A computer understands information composed of only zeros and ones. In the binary number system there are only two digits, 0 and 1; the binary digits are called bits. In the BCD code a decimal digit is represented by four binary bits. The base of the hexadecimal number system is 16, and a hexadecimal digit is represented by four binary bits. George Boole, a mathematician, developed an algebra known as Boolean algebra. Boolean expressions corresponding to truth tables may be implemented as logic circuits using AND, OR and NOT gates.

The set of programs which provides the environment to write application programs is known as system software. A program written using the binary codes specified for the processor's operations and absolute binary memory addresses is known as the machine language of the computer. An assembly language has a one-to-one correspondence with the machine language of a computer and is thus machine dependent. A program written in a high level language is converted to a machine language program by a translator program. Translators are of two types: interpreters and compilers. An interpreter translates a program one statement at a time and immediately executes it. A compiler, on the other hand, translates the entire program into a lower level language before it is executed. The syntax rules of a language are often described using a notation known as BNF notation. The addition of the 2's complement of a number is equivalent to the subtraction of the number.

The 2's complement is used to represent the negative of a binary number. A flip-flop is the basic memory unit. The function of a digital counter is to count the number of electrical pulses. Each machine has its own set of instructions based on the design of its CPU or of its microprocessor. The microprocessor design engineer selects combinations of bit patterns and gives a specific meaning to each combination by using electronic logic circuits; each such combination is called an instruction. The main advantage of using an assembly language for programming is the efficiency of the machine language program resulting from it.
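As a worked illustration of the point that adding the 2's complement is equivalent to subtraction, the small Python fragment below computes 9 - 5 with 4-bit numbers; the word length of 4 bits and the sample values are chosen arbitrarily.

    def twos_complement(value, bits=4):
        return ((1 << bits) - value) % (1 << bits)      # 2**bits - value, kept in range

    a, b, bits = 9, 5, 4
    result = (a + twos_complement(b, bits)) % (1 << bits)   # discard the end carry
    print(format(twos_complement(b, bits), "04b"))          # 1011, the 2's complement of 0101
    print(result, format(result, "04b"))                    # 4, i.e. 9 - 5, shown as 0100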

1.5 Terminal Exercises
1. Find the 2's complement of the following numbers: (i) 1101, (ii) 1100, (iii) 0100, (iv) 0110.
2. Find the 1's complement of the following numbers: (i) 0101, (ii) 11011, (iii) 0000, (iv) 1111.
3. Convert the following decimal numbers to equivalent binary numbers: (i) 38, (ii) 41, (iii) 89.
4. Convert the following hexadecimal numbers to equivalent decimal numbers: (i) 5D, (ii) 76, (iii) 5A3.
5. Convert the following decimal numbers to equivalent hexadecimal numbers: (i) 129, (ii) 109, (iii) 76, (iv) 99.
6. Convert the following binary numbers to equivalent hexadecimal numbers: (i) 101010, (ii) 1101101, (iii) 1011000, (iv) 10010111.

1.6 Supplementary Materials
1. Ramesh S. Gaonkar, Microprocessor Architecture & Applications, Third Edition, Penram International Publishing (India), 1997.
2. Mathur, Introduction to Microprocessors, Third Edition, TMH.

1.7 Assignments
1. Discuss the advantages and applications of number systems.
2. A case study: computer languages and their different types.

1.8 Suggested Reading/Reference Books/Set Books


1. L. A. Leventhal, Introduction to Microprocessors: Software, Hardware, Programming.
2. B. Ram, Fundamentals of Microprocessors, Dhanpat Rai & Sons.

1.9 Learning Activities
1. Collect information on number systems.
2. Collect research reports and information on computer languages.

1.10 Keywords
Binary Number System
Hexadecimal Number System
Binary Coded Decimal (BCD)
ASCII
