The input to the NOT gate is inverted, i.e. a binary input of 0 gives an output of 1 and a binary input of 1 gives an output of 0. The output is known as "NOT A", or alternatively as the complement of A. The truth table for the NOT gate appears below.
A   NOT A
0   1
1   0
The output from the AND gate is written as A·B (the dot can be written halfway up the line, as here, or on the line; note that some textbooks omit the dot completely). The truth table for a two-input AND gate looks like
A   B   A·B
0   0   0
0   1   0
1   0   0
1   1   1
It is also possible to represent an AND gate with a simple analogue circuit; this is illustrated as an animation.
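The gate behaviour described so far can be sketched in a few lines of Python (a minimal illustration of our own, not part of the course material), with 0 and 1 standing for the two logic levels:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# Reproduce the two-input AND truth table row by row.
table = [(a, b, AND(a, b)) for a in (0, 1) for b in (0, 1)]
```

Evaluating `table` gives the four rows of the AND truth table, with output 1 only when both inputs are 1.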
The OR gate
The OR gate has two or more inputs. The output from the OR gate is 1 if any of the inputs is 1. The gate output is 0 if and only if all inputs are 0. The OR gate is drawn as follows
The output from the OR gate is written as A+B. The truth table for a two-input OR gate looks like
A   B   A+B
0   0   0
0   1   1
1   0   1
1   1   1
The NAND gate is drawn as an AND gate with a small circle immediately to the right of the gate on the output line, known as an invert bubble. The output from the NAND gate is written as NOT(A·B) (the same rules apply regarding the placement and appearance of the dot as for the AND gate - see the section on basic logic gates). The Boolean expression reads as "A NAND B". The truth table for a two-input NAND gate looks like
A   B   NOT(A·B)
0   0   1
0   1   1
1   0   1
1   1   0
The output from the NOR gate is written as NOT(A+B). The truth table for a two-input NOR gate looks like
A   B   NOT(A+B)
0   0   1
0   1   0
1   0   0
1   1   0
The output from the XOR gate is written as A⊕B. The truth table for a two-input XOR gate looks like
A   B   A⊕B
0   0   0
0   1   1
1   0   1
1   1   0
and for three inputs

A   B   C   A⊕B⊕C
0   0   0   0
0   0   1   1
0   1   0   1
0   1   1   0
1   0   0   1
1   0   1   0
1   1   0   0
1   1   1   1
An equivalent circuit would comprise an XOR gate, the output of which feeds into the input of a NOT gate. In general, an XNOR gate gives an output value of 1 when there is an even number of 1s on the inputs to the gate. The truth table for a 3-input XNOR gate below illustrates this point. The XNOR gate is drawn using the same symbol as the XOR gate with an invert bubble on the output line, as illustrated below.
The output from the XNOR gate is written as NOT(A⊕B). The truth table for a two-input XNOR gate looks like
A   B   NOT(A⊕B)
0   0   1
0   1   0
1   0   0
1   1   1
and for three inputs

A   B   C   Y
0   0   0   1
0   0   1   0
0   1   0   0
0   1   1   1
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   0
The circuit has two outputs labelled SUM and CARRY. A truth table for the circuit looks like:

A   B   SUM   CARRY
0   0   0     0
0   1   1     0
1   0   1     0
1   1   0     1
Clearly this circuit is performing binary addition of B to A (recalling that in binary 1+1 = 0 carry 1). Such a circuit is called a half-adder; the reason for this is that it enables a carry out of the current arithmetic operation but no carry in from a previous arithmetic operation. A full adder is made by combining two half-adders and an additional OR gate. A full adder has the carry-in capability (denoted as CIN in the diagram below) and so allows cascading, which makes multi-bit addition possible. The circuit diagram for a full adder is given below; note that the two separate half-adders are each enclosed in a box to help understand this circuit.
The final truth table for a full adder looks like the following:

A   B   CIN   S   COUT
0   0   0     0   0
0   0   1     1   0
0   1   0     1   0
0   1   1     0   1
1   0   0     1   0
1   0   1     0   1
1   1   0     0   1
1   1   1     1   1
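The half-adder/full-adder construction described above can be sketched in Python (an illustrative model of our own, not from the course): SUM is an XOR, CARRY an AND, and the full adder chains two half-adders with an OR on the carries.

```python
def half_adder(a, b):
    return a ^ b, a & b                 # (SUM, CARRY)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)           # first half-adder
    s, c2 = half_adder(s1, cin)         # second half-adder
    return s, c1 | c2                   # carries combined by the OR gate

def ripple_add(xs, ys):
    """Cascade full adders for multi-bit addition (bits given lsb first)."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]
```

`ripple_add([1, 0, 1], [0, 1, 1])` adds 101 and 110 (5 and 6, lsb first) and returns the bits of 1011 (decimal 11).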
Addition is a very fast operation and multi-number addition can be performed simply by successively adding each new number to the running summation.
The circuit has two outputs labelled DIFF and BORROW. A truth table for the circuit looks like:

A   B   DIFF   BORROW
0   0   0      0
0   1   1      1
1   0   1      0
1   1   0      0
Clearly this circuit is performing binary subtraction of B from A (A-B, recalling that in binary 0-1 = 1 borrow 1). Such a circuit is called a half-subtractor; the reason for this is that it enables a borrow out of the current arithmetic operation but no borrow in from a previous arithmetic operation. As in the case of addition using logic gates, a full subtractor is made by combining two half-subtractors and an additional OR gate. A full subtractor has the borrow-in capability (denoted as BORIN in the diagram below) and so allows cascading, which makes multi-bit subtraction possible. The circuit diagram for a full subtractor is given below.
The final truth table for a full subtractor looks like:

A   B   BORIN   D   BOROUT
0   0   0       0   0
0   0   1       1   1
0   1   0       1   1
0   1   1       0   1
1   0   0       1   0
1   0   1       0   0
1   1   0       0   0
1   1   1       1   1
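The full subtractor truth table can be checked with a short Python model (a sketch of our own): the difference is a three-way XOR, and a borrow is generated whenever A cannot cover B plus the borrow in.

```python
def full_subtractor(a, b, borin):
    d = a ^ b ^ borin                    # difference bit D
    # borrow out when A is too small to cover B + BORIN
    borout = ((1 - a) & b) | ((1 - a) & borin) | (b & borin)
    return d, borout
```

Evaluating this over all eight input combinations reproduces the truth table above row by row.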
For a wide range of operations many circuit elements would be required. A neater solution is to perform subtraction via addition using complementing, as was discussed in the Binary Arithmetic topic. In this case only adders are needed.
Answer
12 bits gives 2^12 = 4096, therefore the ADC can measure 4096 different values of voltage (from 0 to 4095 inclusive); the number of voltage steps is thus 4095 (one fewer than the number of different values available). Assuming that we set digital 0 to be equivalent to 0 V and digital 4095 to be equivalent to 25 V, then each voltage step is simply given by: 25 V / 4095 = 0.006105 V = 6.105 mV
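The arithmetic in this answer is easy to reproduce (a Python check of our own, with the values taken from the worked answer above):

```python
bits = 12
levels = 2 ** bits                   # 4096 distinct digital values, 0..4095
steps = levels - 1                   # 4095 voltage steps between them
full_scale_v = 25.0                  # 0 V .. 25 V span assumed in the answer
step_size_v = full_scale_v / steps   # one step, approximately 6.105 mV
```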
Answer
A   B   C   D   Y=A+B+C+D
1   x   x   x   1
x   1   x   x   1
x   x   1   x   1
x   x   x   1   1
0   0   0   0   0
Answer
The gate in the circuit is a NAND gate and so gives 0 at the output when all the inputs are 1 and gives 1 at the output otherwise. Therefore the output will be: a=0;b=1;c=1;d=0;e=1;f=0;g=1
The SR Flip-Flop
Consider a circuit comprising two NOR gates as illustrated below
Here R and S are known as the external inputs, Q is known as the output (or external output) and Q' is known as an internal input. Q' is called the state of the system, or state variable, and is related to Q, R and S via the two NOR operations Q = NOT(R + NOT(S + Q')).
To investigate the behaviour of the circuit we develop a truth table assuming that the feedback loop is open circuit (i.e. Q' is an external input). The corresponding truth table is then given by

S   R   Q'   Q
0   0   0    0
0   0   1    1
0   1   0    0
0   1   1    0
1   0   0    1
1   0   1    1
1   1   0    0
1   1   1    0
When the feedback loop is closed this forces Q=Q'. For those instances where Q=Q' in the truth table above then nothing changes when feedback is applied and so the circuit is said to be stable. In those cases where Q and Q' are different then the application of
feedback causes the inputs to change (even though R and S have remained the same) and so the circuit is said to be unstable and a new output is generated. The circuit stability is indicated in the truth table below, where S=Stable, U=Unstable and the number corresponds to a stable state number. For example, S4 means stable state 4; U3 means an unstable state which, upon the application of feedback, will become stable state 3.

S   R   Q'   Q   Stability State
0   0   0    0   S 1
0   0   1    1   S 2
0   1   0    0   S 3
0   1   1    0   U 3
1   0   0    1   U 4
1   0   1    1   S 4
1   1   0    0   S 5
1   1   1    0   U 5
The stability conditions are summarised in a flow table where each circled number represents a stable condition.
In general for flow tables columns are labelled with external inputs and rows are labelled with internal states.
SR=00: stable state 1 [Q=0]. SR=10: switch to unstable state 4, then to stable state 4 [Q=1]. SR=00: switch to stable state 2 [Q=1]. SR=01: switch to unstable state 3, then to stable state 3 [Q=0]. Therefore, if S is taken to 1 (SET condition) then the output Q is set to 1. Q is subsequently held at 1 regardless of what happens to S (HOLD condition) until the input R is taken to 1 (RESET condition). When R=1 the output Q is cleared back to 0. The condition SR=11 is prohibited for the reasons discussed below. This is the action of a SET-RESET FLIP-FLOP (SRFF), or one-bit memory element.
In practice flip-flops usually provide two outputs, i.e. Q, the standard output discussed above, and its complement, as illustrated above. The latter is not to be confused with the internal state variable Q'.
The SRFF can therefore be constructed either of two NOR gates plus feedback or two NAND gates plus feedback, as shown in the circuit diagrams below. In the case of the NAND version it should be noted that the flip-flop is driven by the complements of S and R and so is driven by 0s rather than 1s.
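The NOR-gate version can be modelled behaviourally in Python (an illustrative sketch of our own; real gates settle through propagation delays, which the loop below imitates by iterating a few times):

```python
def nor(a, b):
    return 1 - (a | b)

def srff(s, r, q=0, qbar=1):
    """Settle the cross-coupled NOR pair and return the new Q."""
    for _ in range(4):                      # a few passes are enough to settle
        q, qbar = nor(r, qbar), nor(s, q)
    return q

q = srff(1, 0)              # SET:   Q becomes 1
q = srff(0, 0, q, 1 - q)    # HOLD:  Q stays 1 with S=R=0
q = srff(0, 1, q, 1 - q)    # RESET: Q cleared to 0
```

Running the three calls in order demonstrates the SET, HOLD and RESET behaviour described above.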
Switch Debouncing
In the circuit below, when the switch is in position (a) one contact line is at +5 V (logic 1) and the other is at 0 V (logic 0). Similarly, when the switch is in position (b) the two lines are reversed: the first is 0 and the second is 1.
In reality, when the switch moves from (a) to (b) (or vice versa) two things happen: for a brief moment both lines are 0, and the switch may "bounce", that is, make and break the connection a number of times before settling correctly on the contact. This problem can be solved by connecting one contact line to the S input and the other to the R input of an SRFF. In this case, once S has made the transition from 0 to 1 this is equivalent to the Set feature of the SRFF. As has been seen, once this has happened, no matter how many times S toggles between 0 and 1 the output remains at 1 until R is set to 1. Hence once the first contact is made the output remains stable despite the subsequent bounces of the switch, and so the circuit behaves as intended. This is illustrated in the timing diagram below.
Bearing in mind that the NAND implementation of an SRFF is driven by 0s, it can be seen that the extra two NAND gates in front of the standard SRFF circuitry mean that the circuit will function as a usual SRFF when S or R is 1 and the clock pulse is also 1 ("high"). This flip-flop is therefore synchronous. Specifically, a 0 to 1 transition on either of the inputs S or R will only be seen at the output if the clock is 1. An example timing diagram is given below.
Here the two asynchronous inputs, PRESET and CLEAR, enable the flip-flop to be set to a predetermined state, independent of the CLOCK. Note the invert bubble on these lines, which indicates that the lines are normally held at 1 and that the function (CLEAR or PRESET) is performed by taking the line to 0. The delay flip-flop transfers whatever is at the external input D to the output Q. This does not happen immediately, however; it happens only on a rising clock edge (i.e. as CLK goes from 0 to 1). The input is thus delayed by up to a clock pulse before appearing at the output. This is illustrated in the timing diagram below. The DFF is an edge-triggered device, which means that the change of state occurs on a clock transition (in this case the rising edge as CLK goes from 0 to 1).
Here the function of the asynchronous inputs can clearly be seen: taking PRESET momentarily to 0 sets Q=1, and taking CLEAR momentarily to 0 sets Q=0. The delay flip-flop can also be configured from a JK flip-flop, where the input is connected to J and the complement of the input is connected to K.
Assuming an initial state of CLK=0 and Q=0, it follows that, since D is connected to the complement of Q, D=1. As seen above, whatever is at D is transferred to Q at the next rising clock edge, so as CLK goes from 0 to 1, Q becomes 1 and so D becomes 0. At the next rising CLK pulse the input at D (which is 0) is transferred to Q, so Q becomes 0 and hence D=1, and so on. This cycle is illustrated in the timing diagram below.
It can be seen that for every two clock pulses in there is only one clock pulse out; the circuit is therefore performing division by 2. It should be noted that this behaviour only takes place when the clock pulses are reasonably short (but at least long enough for the output to change state). If the clock pulse is long then oscillation may occur.
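The divide-by-two behaviour is simple to model (a sketch of our own): with D tied to the complement of Q, every rising clock edge toggles Q.

```python
class DivideByTwo:
    """D flip-flop with D wired to NOT Q."""
    def __init__(self):
        self.q = 0
    def rising_edge(self):
        self.q = 1 - self.q       # D (= NOT Q) is transferred to Q
        return self.q

ff = DivideByTwo()
outputs = [ff.rising_edge() for _ in range(4)]   # Q toggles: 1, 0, 1, 0
```

Four input edges produce two complete output cycles, i.e. division by two.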
The JK Flip-Flop
The JK flip-flop is an SRFF with some additional gating logic on the inputs which serve to overcome the SR=11 prohibited state in the SRFF. A simple JKFF is illustrated below
The SR=11 condition cannot arise in this configuration because both Q and its complement are fed back, one into each of the AND gates. Since each AND gate requires all inputs to be 1 to give an output of 1, it is clearly impossible for both of these AND gates to output 1 at the same time, and so S and R cannot both be 1. It should be noted that the circuit above is just one implementation of a JKFF. Another can be formed using the NAND gate version of the SRFF, as illustrated in the lower circuit in the section Design of the SR flip-flop. In this case, since the SR inputs are complemented, i.e. driven by 0 instead of 1, the input gating logic requires NAND gates in place of the AND gates in the circuit above. The full circuit, including the two asynchronous inputs for PRESET and CLEAR, appears below.
A truth table can be developed for the output Q at time t (before a clock pulse) and at time t+1 (after a clock pulse); this is given below (clearly, the complement output is just the complement of Q).

J   K   Qt   Qt+1
0   0   0    0
0   0   1    1
1   0   0    1
1   0   1    1
0   1   0    0
0   1   1    0
1   1   0    1
1   1   1    0
The final two lines in the truth table represent toggling between the two states on each rising CLK pulse. As was the case with the delay flip-flop, this results in division by two of the incoming CLK pulse as long as the clock pulse is short; otherwise oscillation may occur. This behaviour can be summarised as follows: J not equal to K - Q takes the value of J on the CLK pulse (and the complement output takes the value of K); J=K=0 - all transitions inhibited ("no change"); J=K=1 - binary divider.
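The truth table above reduces to the standard JK next-state expression Q(t+1) = J·NOT(Q) + NOT(K)·Q, which can be checked directly (a Python sketch of our own):

```python
def jk_next(j, k, q):
    return (j & (1 - q)) | ((1 - k) & q)

# J=K=1: the output toggles on every clock pulse (binary divider)
q, trace = 0, []
for _ in range(4):
    q = jk_next(1, 1, q)
    trace.append(q)
```

The trace alternates 1, 0, 1, 0, i.e. one output cycle for every two input pulses.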
The master-slave flip-flop is essentially two back-to-back JKFFs; note, however, that feedback from the device output goes both to the master FF and the slave FF. Any input to the master-slave flip-flop at J and K is first seen by the master FF part of the circuit while CLK is high (=1). This behaviour effectively "locks" the input into the master FF. An important feature here is that the complement of the CLK pulse is fed to the slave FF, so the outputs from the master FF are only "seen" by the slave FF when CLK is low (=0). Therefore on the high-to-low CLK transition the outputs of the master are fed through the slave FF. This means that at most one change of state can occur per clock pulse when J=K=1, and so oscillation between the states Q=0 and Q=1 during a single CLK pulse does not occur.
Ripple Counters
Both the delay flip-flop and the JK flip-flop enable a pulse train at the input to be divided by two. If these flip-flops are cascaded together, it follows that division by 4, 8, 16, etc. can take place. In general, for n cascaded flip-flops, division by 2^n is possible. The following circuit comprises 4 JKFFs cascaded such that the Q output from each flip-flop forms the clock input to the following flip-flop.
Note here that the invert bubbles on the clock inputs mean that the flip-flops trigger on the falling edge of each clock pulse. For all 4 JKFFs the J and K inputs are count enabled, i.e. held at 1. Assuming an initial state where all outputs are 0, it is possible to develop a truth table for the four outputs Qa, Qb, Qc and Qd on successive clock pulses. This is given below.

Qd   Qc   Qb   Qa
0    0    0    0
0    0    0    1
0    0    1    0
0    0    1    1
0    1    0    0
0    1    0    1
0    1    1    0
0    1    1    1
1    0    0    0
1    0    0    1
1    0    1    0
1    0    1    1
1    1    0    0
1    1    0    1
1    1    1    0
1    1    1    1
So, on successive clock pulses the output from the four JKFFs is exactly the same as the pure binary coded representation of the decimal numbers 0 to 15. Here Qa is weighted by 1, Qb is weighted by 2, and so on. Such a device is known as a ripple counter or a modulo-16 (mod-16) counter.
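The ripple behaviour can be modelled as a chain of toggles (illustrative Python of our own, not from the course): each stage flips on the falling (1 to 0) edge of the stage before it.

```python
def ripple_count(pulses, stages=4):
    q = [0] * stages                 # [Qa, Qb, Qc, Qd], all initially 0
    for _ in range(pulses):
        i, falling = 0, True
        while falling and i < stages:
            q[i] = 1 - q[i]          # this stage toggles
            falling = (q[i] == 0)    # its 1 -> 0 transition clocks the next
            i += 1
    return q
```

After 10 pulses the outputs are Qa=0, Qb=1, Qc=0, Qd=1, i.e. 1010bin = 10, and after 16 pulses the counter wraps back to 0000.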
When the count reaches 1010bin then Qb = Qd = 1 and so the output from the NAND gate changes from 1 to 0. In this case CLEAR goes from 1 to 0 and causes all of the Q outputs to be reset to 0. At the same time the NOT gate provides a binary 1 to indicate the CARRY condition out of the counter. Once Qb = Qd = 0 then the output of the NAND gate returns to 1 and the count can restart.
The timing diagram below is for a 4-stage ripple counter being fed with a clock frequency of 7.7 MHz and a tpd of 65 ns.
In this case the ripple counter is clearly being driven too fast (specifically, the CLK pulse is too fast for the counter). One way of avoiding this is to always ensure that the clock period (TCLK) is longer than the total propagation time of the counter; this can be expressed as TCLK > N x tpd, where N is the number of stages in the ripple counter.
The operation of this synchronous counter is as follows: all CLK inputs are wired in parallel;
J and K of FF1 are tied to 1; as previously, FF triggering on falling edges is assumed; J and K of FF2 are tied to QA, therefore the state of QA (FF1) determines whether or not FF2 changes state (toggles); if QA=0 before CLK then QB (FF2) does not toggle, if QA=1 before CLK then QB (FF2) toggles; the FF3 inputs are fed from QA and QB via an AND gate, so QC (FF3) toggles only when QA=QB=1 before CLK; similarly, QD (FF4) is arranged such that it toggles only when QA=QB=QC=1. This arrangement leads to all output states toggling together in a synchronous manner.
Numeric Representation
Number Systems
The 0s and 1s present in the logic circuits discussed in the course can be used to represent real data inside logic circuitry in, for example, microprocessors. To do this a binary format has to be adopted. The usual practice is to use so-called pure binary coding, whereby each binary digit (either 0 or 1) carries a certain weight according to its position in the binary number. So, for example

110100 = 1x2^5 + 1x2^4 + 0x2^3 + 1x2^2 + 0x2^1 + 0x2^0 = 32 + 16 + 0 + 4 + 0 + 0 = 52

The same approach applies to non-integral numbers, so, for example

110.101 = 1x2^2 + 1x2^1 + 0x2^0 + 1x2^-1 + 0x2^-2 + 1x2^-3 = 4 + 2 + 0 + 0.5 + 0 + 0.125 = 6.625

These examples illustrate binary to decimal conversion. To convert a fractional decimal number to binary, first divide the number at the decimal point and treat the two parts separately. For the integer part, repeatedly divide by 2 and store the remainder until nothing is left. The remainders, when reverse-ordered, give the first part of the binary number. The reverse-ordering comes about since the first division by 2 gives the least significant bit (lsb) and the last division gives the most significant bit (msb). For the fractional part, repeatedly multiply by 2 and record the carries, i.e. when the resulting number is greater than 1. Repeat this process until the desired precision is achieved. A full example of this technique is given in the Solved Problems. A useful way of expressing long pure binary coded numbers is by the use of hexadecimal numbers, i.e. base 16. This is because each group of four bits (called a nibble since 2
nibbles make a byte!) can be converted into one hexadecimal number. The mapping between binary, decimal and hexadecimal (hex) numbers is shown below.

Decimal   Binary   Hex
0         0000     0
1         0001     1
2         0010     2
3         0011     3
4         0100     4
5         0101     5
6         0110     6
7         0111     7
8         1000     8
9         1001     9
10        1010     A
11        1011     B
12        1100     C
13        1101     D
14        1110     E
15        1111     F
To convert a binary number into its hexadecimal equivalent first ensure that the binary number has a number of digits that is a multiple of 4, if not add zeros to the left hand side of the number until it does. Then split the number into nibbles and convert each nibble into its hexadecimal counterpart.
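That procedure translates directly into a few lines of Python (a sketch of our own):

```python
def bin_to_hex(bits):
    pad = (-len(bits)) % 4                  # zeros needed on the left
    bits = "0" * pad + bits
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))
```

`bin_to_hex("1101101101101")` pads the input to 0001 1011 0110 1101 and returns "1B6D".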
Binary Arithmetic
Here the rules for standard arithmetic with binary numbers are discussed.
Binary Addition
Binary addition is completely straightforward and is done in the same way as standard decimal addition, remembering that, in binary terms, "one plus one equals zero carry one". This is also true for fractional binary numbers, as illustrated below.

  Binary     Decimal
     101        5
    +110       +6
    ____       __
    1011       11

  Binary     Decimal
   1001.1       9.5
  +1100.1     +12.5
  _______     _____
  10110.0      22.0

As a further fractional example, 110.1101 in binary is 6.8125 in decimal.
Binary Subtraction
Binary subtraction usually takes place by complementing, i.e. subtraction is via the addition of negative numbers. This technique requires the use of the so-called ones (1's) complement and twos (2's) complement of a binary number. The 1's complement of a binary number is formed simply by complementing each digit in turn. The 2's complement of a binary number is formed by adding 1 to the least significant bit of the 1's complement (note that in the case of fractional binary numbers this is not the same as adding 1 to the 1's complement number - see below).

Decimal   Binary     1's Complement   2's Complement
5         00000101   11111010         11111011
27        00011011   11100100         11100101
76        01001100   10110011         10110100
4.625     0100.1010  1011.0101        1011.0110

Note that in order to correctly express the 1's complement and 2's complement binary numbers a fixed-length format must be chosen (8-bit in the table above) and leading zeroes must be included when writing the original pure binary format number. Finally, in order to represent a negative binary number the MSB becomes a sign bit: if the MSB=1 the number is negative, if the MSB=0 the number is positive, so, e.g., 00010011 = +19 and 10010011 = -19. This is called true magnitude format.
In order to perform binary subtraction the rules are as follows. When the sum to be performed is A-B, the number to be subtracted (B) is converted to its 2's complement form and then added to A using standard binary addition. If, after the addition, the sign bit = 1 then further steps must be performed: o first take the 2's complement of the result; o then make the sign bit of the new number equal to 1; o interpret the result in true magnitude format. For sums of the form -A-B, take the 2's complement of A, add it to the 2's complement of B and then proceed as above. Sums of the form -A-(-B) can be converted to B-A before proceeding as above. Examples of binary subtraction using this method can be found in the Solved Problems.
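These rules can be followed mechanically in an 8-bit format (a Python sketch of our own; results come out in the true magnitude format defined above):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def twos_complement(x):
    return (((~x) & MASK) + 1) & MASK      # 1's complement, then add 1

def subtract(a, b):
    """Compute a - b, returning the 8-bit true magnitude bit pattern."""
    total = (a + twos_complement(b)) & MASK
    if total >> (WIDTH - 1):               # sign bit set: negative result
        magnitude = twos_complement(total) # 2's complement of the result
        return magnitude | (1 << (WIDTH - 1))  # replace the sign bit
    return total
```

`subtract(75, 42)` gives 0b00100001 (+33) and `subtract(42, 75)` gives 0b10100001 (-33 in true magnitude format).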
Parity
In any electronic system involving the transfer of data (in the form of binary digits), data transmission errors are possible. The method of parity is widely used for error detection. An extra bit, known as the parity bit, is attached as the least significant bit to the binary data word (or code group) to be transferred. The new data word to be transmitted (known as the total group) is thus the original code group with the parity bit appended. Two systems are used:

Even parity: the parity bit is set such that the total number of 1s in the total group is even. E.g. 11001, which has an odd number of 1s, gives the total group 110011; 11110, which has an even number of 1s, gives the total group 111100.

Odd parity: the parity bit is set such that the total number of 1s in the total group is odd. E.g. 11001 gives the total group 110010; 11110 gives the total group 111101.

At the receiving end, a check is made on the parity of the whole code to detect an error before stripping the parity off to recover the original data word. Examples of circuits for transmitting (coding) and receiving (decoding) parity-encoded data are given below.
This is a coding circuit for a 4-bit data word ABCD; it will create the correct parity bit for transmission with the code group.
This is the decoding circuit for a code group comprising a 4-bit data word ABCD along with a parity bit. The Check Bit will equal 0 if even parity has been used and 1 if odd parity has been used. The method of parity does not pinpoint the error; rather it acts as an error flag, indicating that an error has occurred somewhere. Often the code group has to be re-transmitted when a parity error is detected. Similarly, this method only safeguards against single errors and not multiple errors, which could conspire to leave the parity check valid. In order to detect complex errors and to pinpoint where the errors have occurred, more sophisticated and complex error-checking algorithms have to be employed - which subsequently requires more data bits to be transmitted for each code group.
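Parity generation and checking amount to counting 1s (a Python sketch of our own, with code groups held as bit strings and the parity bit appended as the least significant bit):

```python
def add_parity(code_group, odd=False):
    ones = code_group.count("1")
    parity = (ones % 2) ^ (1 if odd else 0)   # force total count even (or odd)
    return code_group + str(parity)

def parity_ok(total_group, odd=False):
    ones = total_group.count("1")
    return ones % 2 == (1 if odd else 0)
```

`add_parity("11001")` returns "110011" and `add_parity("11110", odd=True)` returns "111101", matching the examples above; a single flipped bit makes `parity_ok` return False.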
Binary Codes
The usual way of expressing a decimal number in terms of a binary number is known as pure binary coding and is discussed in the Number Systems section. A number of other techniques can be used to represent a decimal number. These are summarised below.
In binary coded decimal (BCD), each decimal digit is coded as its own 4-bit binary number; for example, BCD for 69 = 0110 1001.
Where the result of any addition exceeds 9 (1001), six (0110) must be added to the sum to account for the six invalid BCD codes that are available with a 4-bit number. This is illustrated in the example below:

     1000   BCD for 8
    +0111   BCD for 7
    _____
     1111   exceeds 1001, so add six
    +0110
    _____
0001 0101   BCD for 15
Note that in the last example the 1 that carried forward from the first group of 4 bits has made a new 4-bit number and so represents the "1" in "15".
In the examples above the BCD numbers are split at every 4-bit boundary to make reading them easier. This is not necessary when writing a BCD number down. This coding is an example of a binary coded (each decimal number maps to four bits) weighted (each bit represents a number, 1, 2, 4, etc.) code.
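The add-six correction described above can be sketched for one BCD digit position (Python, an illustration of our own):

```python
def bcd_add_digit(a, b):
    s = a + b                     # plain 4-bit binary addition
    if s > 0b1001:                # invalid BCD result: apply the correction
        s += 0b0110
    return s >> 4, s & 0b1111     # (carry to the next digit, BCD digit)
```

`bcd_add_digit(0b1000, 0b0111)` (8 + 7) returns a carry of 1 and the digit 0101, i.e. BCD 15 as in the example above.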
The 1's complement of a 4221 representation is important in decimal arithmetic (complementing each bit of a 4221 code gives the 9's complement of the decimal digit). In forming the code remember the following rules: below decimal 5, use the right-most bit representing 2 first; above decimal 5, use the left-most bit representing 2 first; decimal 5 = 2+2+1 and not 4+1.
Gray Code
Gray coding is an important code, used for its speed; it is also relatively free from errors. In pure binary coding or 8421 BCD, counting from 7 (0111) to 8 (1000) requires 4 bits to be changed simultaneously. If this does not happen, various numbers could be momentarily generated during the transition, creating spurious numbers which could be read.
Gray coding avoids this since only one bit changes between subsequent numbers. To construct the code there are two simple rules: start with all 0s, then proceed by changing the least significant bit that brings about a new state. The first 16 Gray coded numbers are indicated below.

Decimal   Gray Code
0         0000
1         0001
2         0011
3         0010
4         0110
5         0111
6         0101
7         0100
8         1100
9         1101
10        1111
11        1110
12        1010
13        1011
14        1001
15        1000

To convert a Gray-coded number to binary, follow this method:
1. The binary number and the Gray-coded number will have the same number of bits.
2. The binary MSB (left-hand bit) and Gray code MSB will always be the same.
3. To get the binary next-to-MSB (i.e. the next digit to the right), add the binary MSB and the Gray code next-to-MSB. Record the sum, ignoring any carry.
4. Continue in this manner right through to the end.

An example of converting a Gray code number to its pure binary equivalent is available in the Solved Problems. Gray coding is a non-BCD, non-weighted, reflected binary code.
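Both directions of the conversion have compact bitwise forms (a Python sketch of our own; the running-XOR loop is the method above expressed on integers):

```python
def binary_to_gray(b):
    return b ^ (b >> 1)       # each Gray bit = binary bit XOR the bit to its left

def gray_to_binary(g):
    b = 0
    while g:                  # running XOR from the msb downwards
        b ^= g
        g >>= 1
    return b

gray_sequence = [binary_to_gray(n) for n in range(16)]
```

`gray_sequence` reproduces the 16-entry table above, and `gray_to_binary(0b10011011)` gives 0b11101101.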
Answer
First we choose to use 8-bit true magnitude format. In this format: +75 = 01001011; +42 = 00101010.

1. 75 - 42:
   01001011   +75 in true magnitude format
  +11010110   2's complement of +42
  _________
   00100001   this is +33 in true magnitude format
2. 42 - 75:
   00101010   +42 in true magnitude format
  +10110101   2's complement of +75
  _________
   11011111   sign bit is 1 so perform 2's complement
   00100001   2's complement of last line
   10100001   replace sign bit: this is -33 in true magnitude format
3. -75-(-42) = -75+42 = 42-75 and so:
   00101010   +42 in true magnitude format
  +10110101   2's complement of +75
  _________
   11011111   sign bit is 1 so perform 2's complement
   00100001   2's complement of last line
   10100001   replace sign bit: this is -33 in true magnitude format
Answer
As the decimal number has both an integer and a fractional part, the problem has to be done in two steps. First take the integer part, i.e. 57, and repeatedly divide by 2, noting the remainder of each division.

57 / 2 = 28 remainder 1 (lsb)
28 / 2 = 14 remainder 0
14 / 2 =  7 remainder 0
 7 / 2 =  3 remainder 1
 3 / 2 =  1 remainder 1
 1 / 2 =  0 remainder 1 (msb)
The binary equivalent of 57dec is therefore given by the remainders ordered from most significant bit (msb) to least significant bit (lsb) and is hence 111001bin. The fractional part is given by repeatedly multiplying by 2 and storing the carries (when the result of the multiplication exceeds 1) until the required bit accuracy is reached.

.4801 x 2 = .9602 carry 0
.9602 x 2 = .9204 carry 1
.9204 x 2 = .8408 carry 1
.8408 x 2 = .6816 carry 1
.6816 x 2 = .3632 carry 1
Answer
The binary number 1101101101101 has 13 digits. First extend this to a multiple of 4 (i.e. 16) by adding three leading 0s, so 1101101101101 becomes 0001101101101101. Next break the binary number up into nibbles (4-bit groups) and convert each nibble to its hexadecimal equivalent.

Binary:        0001  1011  0110  1101
Hexadecimal:   1     B     6     D

hence 1101101101101bin = 1B6Dhex
Answer
The rules of conversion from Gray code to binary are given in the Gray code summary. The Gray code number has 8 bits, so the binary equivalent will also have 8 bits. The most significant bit (left-most bit) is the same in both cases. The next-to-most significant bit in the binary number comes from adding the most significant bit of the binary number to the next-to-most significant bit of the Gray coded number; the sum is noted and any carry ignored. In general, working from left to right, each binary bit is formed by summing the binary bit immediately to its left with the Gray code bit in the same position; the sum is noted and any carry ignored. In the case of this example (Gray code 10011011, numbering digits from the left):

Binary Digit 1 = 1 (same as Gray code)
Binary Digit 2 = 1 + 0 = 1
Binary Digit 3 = 1 + 0 = 1
Binary Digit 4 = 1 + 1 = 0 (carry 1, ignored)
Binary Digit 5 = 0 + 1 = 1
Binary Digit 6 = 1 + 0 = 1
Binary Digit 7 = 1 + 1 = 0 (carry 1, ignored)
Binary Digit 8 = 0 + 1 = 1

and so
10011011gray = 11101101bin
Truth Tables
Often the logical functionality of a gate or a series of gates is illustrated by a truth table. With a truth table all possible combinations of input states are considered and the output value for each of these input states is listed in a table. Examples of truth tables can be found in the descriptions of basic logic gates and other logic gates.
A   B   C   A+B+C
0   0   0   0
0   0   1   1
0   1   0   1
0   1   1   1
1   0   0   1
1   0   1   1
1   1   0   1
1   1   1   1
The same function written compactly using don't care (x) entries:

A   B   C   A+B+C
0   0   0   0
1   x   x   1
x   1   x   1
x   x   1   1
Venn Diagrams
Venn diagrams are a useful technique to demonstrate equivalence relationships in Boolean expressions. In a Venn diagram, the binary variables of a function are represented as overlapping areas in a Universe. Complementing, or the NOT function, is represented as the remainder of the Universe outside a given area. The Venn diagrammatic representations of some simple expressions are illustrated below.
Universe
In a Venn diagram the OR function is taken as the combination or union of areas, while the AND function is the intersection or common part between two or more overlapping areas. Two functions are said to be equivalent if they define identical areas on a Venn diagram. For an example of the use of Venn diagrams see the proof of de Morgan's theorems by Venn diagrams in the Solved Problems at the end of this topic.
de Morgan's Theorems
de Morgan's theorems state that

NOT A · NOT B = NOT(A + B)

and

NOT A + NOT B = NOT(A · B)
The first equation reads "NOT A AND NOT B EQUALS A NOR B" and the second reads "NOT A OR NOT B EQUALS A NAND B". Proofs of these equations by two methods, i.e. by truth tables and by Venn diagrams, are available in the Solved Problems at the end of this topic. The general rules for converting a Boolean expression comprised entirely of ORs (ANDs) to one consisting entirely of ANDs (ORs) are available in the Lecture 3 summary on Boolean Algebra.
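The truth-table proof can be replayed exhaustively in Python (a short check of our own over all four input combinations):

```python
for a in (0, 1):
    for b in (0, 1):
        # NOT A . NOT B  equals  A NOR B
        assert ((1 - a) & (1 - b)) == 1 - (a | b)
        # NOT A + NOT B  equals  A NAND B
        assert ((1 - a) | (1 - b)) == 1 - (a & b)
```

Both assertions hold for every row, which is exactly the truth-table proof.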
By changing to negative coding (+5V = 0 (False), 0V = 1 (True)) the truth table becomes

A   B   Y
1   1   1
1   0   1
0   1   1
0   0   0
It is important to recognise here that the circuit has not been changed, only the choice of the representation of the binary numbers. It is also important to note that in switching from positive coding to negative coding the circuit functionality has gone from AND to OR and not, as might be expected, from AND to NAND.
Canonical Forms
The functionality of any logic circuit can be expressed in one of two alternative and equivalent canonical forms. These canonical forms consist of a Boolean algebraic expression and are generally developed from a truth table. They are: The minterm form - known as the first canonical form, this is a pure OR combination of minterms, where a minterm is an AND function that includes each variable once, in its normal or complemented form. The first canonical form is also known as the sum of products. The maxterm form - known as the second canonical form, this is a pure AND combination of maxterms, where a maxterm is an OR function that includes each variable once, in its normal or complemented form. The second canonical form is also known as the product of sums. The construction of these forms is best illustrated by the examples below.
The first canonical form is developed from the output 1's in the truth table. As can be seen, Y is only 1 for the 1st and 4th rows of the truth table. Therefore the minterm (AND function) expressions for these two rows are formed and OR-ed together to give the minterm form for the circuit as
The corresponding circuit would be implemented with AND-OR logic i.e. with the outputs from one or more AND gates being OR-ed together to give the final output.
The second canonical form is developed from the output 0's in the truth table. As can be seen, Y is only 0 for the 1st and 4th rows of the truth table. Developing the maxterm expression here is slightly more complicated and there are two approaches. In the first approach we first develop the minterm expression for the output 0's (not 1's) in the truth table. For the truth table above this will be given by
Then it is necessary to apply the rules of Boolean algebra for converting minterm expressions to maxterm expressions as is described in the Boolean Algebra summary. This leads to the final maxterm form for this truth table of
The second approach allows the maxterm form to be derived directly from the output 0's in the truth table using the following rules. Take each line in the truth table where the output is 0 and:

1. Invert the variables (where a variable is 1, write its complement, and vice versa).
2. OR these variables together to form the maxterm.
3. Build the second canonical form from the AND of these maxterms.

In the case of the truth table above it is possible to go directly to the final maxterm form using this approach. The corresponding circuit would be implemented with OR-AND logic, i.e. with the outputs from one or more OR gates being AND-ed together to give the final output.
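Both canonical forms can be generated mechanically from a truth table. The sketch below is illustrative only (the example function and names are assumptions, not taken from the truth table in the text): it builds the sum-of-products and product-of-sums expressions for a two-input function that is 1 on rows (0,0) and (1,1).

```python
# Example truth table: Y = 1 on rows (0,0) and (1,1).
rows = [((0, 0), 1), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
names = ["A", "B"]

def literal(name, bit, plain_when):
    # Return the variable when bit == plain_when, else its complement.
    return name if bit == plain_when else "NOT " + name

# Minterm (sum-of-products) form: one AND term per output 1;
# a variable appears uncomplemented where its entry is 1.
minterms = [" . ".join(literal(n, v, 1) for n, v in zip(names, ins))
            for ins, out in rows if out == 1]
sop = " + ".join("(%s)" % t for t in minterms)

# Maxterm (product-of-sums) form: one OR term per output 0;
# each variable is inverted relative to its entry in the row.
maxterms = [" + ".join(literal(n, v, 0) for n, v in zip(names, ins))
            for ins, out in rows if out == 0]
pos = " . ".join("(%s)" % t for t in maxterms)

print("minterm form:", sop)  # (NOT A . NOT B) + (A . B)
print("maxterm form:", pos)  # (A + NOT B) . (NOT A + B)
```

Note how the maxterm branch implements the inversion rule directly: for the output-0 row (0,1), the variables invert to give the maxterm A + NOT B.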
BOOLEAN ALGEBRA
de Morgan's theorems: NOT A . NOT B = NOT (A + B) and NOT A + NOT B = NOT (A . B). In general this may be expressed as: AND is exchanged for OR (and vice versa); each variable is complemented; the whole expression is complemented. Order of precedence: expressions in brackets first, AND before OR. Commutative and associative laws apply, i.e. A + B = B + A, A . B = B . A, (A + B) + C = A + (B + C) and (A . B) . C = A . (B . C).
Distributive law: A . (B + C) = A . B + A . C
Most Boolean algebra relations fall into pairs, each being the dual of the other:
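These identities are easy to machine-check. The sketch below verifies the distributive law and its dual exhaustively over all input combinations:

```python
from itertools import product

# Verify the distributive law and its dual for all Boolean inputs.
for a, b, c in product((0, 1), repeat=3):
    # A . (B + C) = A . B + A . C
    assert a & (b | c) == (a & b) | (a & c)
    # Dual form: A + (B . C) = (A + B) . (A + C)
    assert a | (b & c) == (a | b) & (a | c)
```

The dual is obtained, exactly as the general rule states, by exchanging AND for OR throughout.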
Karnaugh Maps
Karnaugh maps or K-maps are a useful graphical technique for performing the minimization of a canonical form. They utilise Boolean theorems in a mapping procedure which results in a simplified Boolean expression being developed. There are five basic steps in the minimization procedure:

1. Develop the first canonical expression (minterm form) from the associated truth table. This has already been described in the section on canonical forms.
2. Plot 1s in the Karnaugh map for each minterm in the expression. Each AND-ed set of variables in the minterm expression is placed in the corresponding cell on the K-map. See below for more information on labelling the K-map.
3. Loop adjacent groups of 2, 4 or 8 1s together. A more detailed summary of looping rules can be found below.
4. Write one minterm per loop, eliminating variables where possible. When a variable and its complement are contained inside a loop then that variable can be eliminated (for that loop only); save the variables that are left.
5. Logically OR the remaining minterms together to give the simplified minterm expression.

A detailed example of using Karnaugh maps for circuit simplification is available in the Solved Problems. In the case of simplifying a maxterm expression the steps are very similar, with only slight differences due to the OR-AND nature of maxterm expressions:

1. Develop the second canonical expression (maxterm form) from the associated truth table. This has already been described in the section on canonical forms.
2. Plot 1s in the Karnaugh map for each maxterm in the expression. Each OR-ed set of variables in the maxterm expression is placed in the corresponding cell on the K-map. See below for more information on labelling the K-map. Note that here you are propagating 0s in the truth table through as 1s in the K-map.
3. Loop adjacent groups of 2, 4 or 8 1s together. A more detailed summary of looping rules can be found below.
4. Write one maxterm per loop, eliminating variables where possible. When a variable and its complement are contained inside a loop then that variable can be eliminated (for that loop only); save the variables that are left.
5. Logically AND the remaining maxterms together to give the simplified maxterm expression.
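Whatever loops are chosen, the simplified expression must agree with the original canonical form on every input. A brute-force equivalence check such as the sketch below (the function names and the example expression are illustrative assumptions) is a useful safeguard after any K-map minimization:

```python
from itertools import product

def equivalent(f, g, n):
    """Return True if two n-input Boolean functions agree on all 2**n inputs."""
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n))

# Example: the canonical form NOT A.B + A.NOT B + A.B simplifies to A + B.
canonical = lambda a, b: ((1 - a) & b) | (a & (1 - b)) | (a & b)
simplified = lambda a, b: a | b

assert equivalent(canonical, simplified, 2)
```

Exhaustive comparison is practical here because a K-map problem never involves more than a handful of variables.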
whilst the correct labelling for a maxterm K-map for the same circuit is
the labelling for 2- and 3-input maps follows logically from this.
Problem: prove de Morgan's theorems, NOT A . NOT B = NOT (A + B) and NOT A + NOT B = NOT (A . B), by means of truth tables.
Answer
For the first expression the relevant truth table is given below; the equality of the entries for NOT A . NOT B and NOT (A + B) proves the first theorem.
A             0 0 1 1
B             0 1 0 1
NOT A         1 1 0 0
NOT B         1 0 1 0
NOT A . NOT B 1 0 0 0
A + B         0 1 1 1
NOT (A + B)   1 0 0 0
The truth table for the second expression is given below; again the identical entries for NOT A + NOT B and NOT (A . B) prove the theorem.
A             0 0 1 1
B             0 1 0 1
NOT A         1 1 0 0
NOT B         1 0 1 0
NOT A + NOT B 1 1 1 0
A . B         0 0 0 1
NOT (A . B)   1 1 1 0
Problem: prove de Morgan's theorems, NOT A . NOT B = NOT (A + B) and NOT A + NOT B = NOT (A . B), by means of Venn diagrams.
Answer
Considering the first expression above, the Venn diagrams for NOT A and NOT B are
respectively. The Boolean expression requires the logical AND of these two variables which, in terms of a Venn diagram, is given by that part which is common to both diagrams. This is drawn as
For the right-hand side of the expression, the logical OR of A and B is required first. In a Venn diagram representation, a logical OR is performed by taking any shaded part of either of the Venn diagrams. The three diagrams below denote A, B and A + B respectively.
Finally we require the complement of A OR B (i.e. A NOR B). For a Venn diagram, complementing is represented by all those parts of the Universe not populated by the original diagram. Therefore NOT (A + B) is represented as
In the case of the second expression above the approach is identical. Here the Venn diagrams for NOT A and NOT B are OR-ed together to give NOT A + NOT B, while complementing the diagram for A . B gives NOT (A . B). The two resulting diagrams are identical, and so NOT (A . B) = NOT A + NOT B, which proves the second theorem.
Answer
The first step is to develop the first canonical expression (minterm form) for the truth table; this is given by
Next draw a correctly-labelled 4-input Karnaugh map and populate it with 1s corresponding to the individual minterms in the above expression. This gives
At this stage it is necessary to loop all of the 1s in the K-map. Remember that in order to simplify the Boolean expression as much as possible it is necessary to draw the largest loops possible. It is also necessary to include each 1 in the K-map in at least one loop. A summary of Karnaugh map looping rules is available in the lecture summary. The looped Karnaugh map looks like
The red loop is a loop of 4, i.e. 2^2, and therefore allows two variables to be eliminated. The loop contains both A and B and their complements, and so the resulting minterm is NOT C AND D. The green loop is a loop of 2, so one variable can be eliminated, leaving 3 variables in the minterm. The K-map shows that this loop is independent of D, and so the minterm is given by NOT A AND B AND NOT C. The purple loop is again a loop of 4. It contains A, NOT A, D and NOT D, so these two variables can be eliminated. The corresponding minterm is NOT B AND C. The blue loop is totally redundant, as all of the 1s in the blue loop are also contained in another loop. Therefore there is no minterm from the blue loop in the simplified minterm form.
Finally the simplified minterm expression is derived by writing one term per loop and OR-ing the final terms together to give
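The three loops described above give Y = NOT C . D + NOT A . B . NOT C + NOT B . C. As a sanity check, a short sketch can enumerate the rows of the 16-row truth table that this simplified expression covers (the term list is taken from the loop analysis above; anything else here is illustrative):

```python
from itertools import product

# Simplified expression from the looped K-map:
# Y = NOT C.D + NOT A.B.NOT C + NOT B.C
def y(a, b, c, d):
    return ((1 - c) & d) | ((1 - a) & b & (1 - c)) | ((1 - b) & c)

# Enumerate the input combinations for which Y = 1.
ones = [bits for bits in product((0, 1), repeat=4) if y(*bits)]
print(len(ones), "of 16 rows give Y = 1")
```

These recovered rows should reproduce exactly the 1s originally plotted in the K-map; any discrepancy indicates a looping error.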
Answer
In the truth table below, A = 1 means that A voted in favour of a decision, etc. Y = 0 means that the decision is not carried; Y = 1 means that the decision is carried; Y = A means that the decision is carried against A's wishes; Y = B means that the decision is carried against B's wishes.

A B C D | Y
0 0 0 0 | 0
0 0 0 1 | 0
0 0 1 0 | 0
0 0 1 1 | 0
0 1 0 0 | 0B
0 1 0 1 | 0B
0 1 1 0 | 0B
0 1 1 1 | 1A
1 0 0 0 | 0A
1 0 0 1 | 1B
1 0 1 0 | 1B
1 0 1 1 | 1B
1 1 0 0 | 1
1 1 0 1 | 1
1 1 1 0 | 1
1 1 1 1 | 1
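The voting rule implied by the table entries is that a simple majority carries the decision, with A's vote deciding a 2-2 tie (this casting-vote interpretation is inferred from the table, not stated explicitly). It can be sketched as follows:

```python
from itertools import product

# Committee of four: a decision is carried on a majority of votes,
# with A holding the casting vote when the committee splits 2-2.
# (The casting-vote rule is inferred from the truth table above.)
def carried(a, b, c, d):
    votes = a + b + c + d
    if votes > 2:
        return 1
    if votes == 2:
        return a  # A's casting vote decides the tie
    return 0

table = {bits: carried(*bits) for bits in product((0, 1), repeat=4)}
print(sum(table.values()), "of 16 outcomes are carried")
```

Generating the table programmatically like this makes it easy to check each row against the one derived by hand.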