
Chapter 2: Overview of Digital Logic

Digital computers use discrete values represented with a finite number of digits - irrational or repeating numbers cannot be represented exactly this way.
Binary numbers (base 2) are used to represent data.
Digital Logic
- allows physical components to process and remember information
- laws and operations that manipulate information → Boolean algebra
2.1 The Building Blocks
Boolean algebra works with true/false values (variables)
Logic gates are operators for Boolean algebra
Truth tables describe the output for each combination of inputs.
2.2 Basic Boolean Operations
NOT
AND (both statements true)
OR [inclusive] (one or both true)
Because AND and OR take two inputs, they are called binary operators; NOT, in contrast, is a unary operator.
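As a quick illustration (a minimal sketch, not from the text), the three basic operations map directly onto Python's built-in Boolean operators:

```python
# The three basic Boolean operations, with their full truth tables.
def NOT(a): return not a
def AND(a, b): return a and b   # true only when both inputs are true
def OR(a, b): return a or b     # true when one or both inputs are true

for a in (False, True):
    for b in (False, True):
        print(a, b, "| AND:", AND(a, b), "OR:", OR(a, b))
```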
2.3 Writing Boolean Expressions
In this book, we will usually use + or ∨ for OR, juxtaposition or ∧ for AND, and a line over the variable or expression for NOT.
2.5 Other Gates
NAND
"NOT AND" - true when at least one of the two inputs is false.
NOR
True whenever OR would be false.
"NOT OR"
XOR
Exclusive OR
The output of this gate is true when
one or the other, but not both, of its
inputs is true.
We can use xor to make an equality
function simply by negating it.
Either NAND or NOR is by itself sufficient to be the basis of digital logic.
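A small sketch of that sufficiency claim, building the other basic gates out of NAND alone (Python used for illustration):

```python
# NAND is universal: NOT, AND, and OR can all be built from it.
def NAND(a, b): return not (a and b)

def NOT(a):    return NAND(a, a)            # NAND(a, a) = not a
def AND(a, b): return NOT(NAND(a, b))       # negating NAND recovers AND
def OR(a, b):  return NAND(NOT(a), NOT(b))  # De Morgan: a or b = not(not a and not b)

# Check the built-up gates against Python's own operators on all inputs.
assert all(AND(a, b) == (a and b) and OR(a, b) == (a or b)
           for a in (False, True) for b in (False, True))
```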
2.6 Truth Tables Revisited
Include all possible combinations of inputs.
As with the more familiar base 10, the least significant digit is on the
right, while the most significant is on the left. The least significant digit
varies the fastest.
With n inputs, there will be 2^n rows - exponential growth.
2.7 Creating Functions with Boolean Operators
Boolean functions can be created by putting Boolean operators together.
To write the function, we must consider the statement of the problem. We will take an
umbrella (U = 1) if either or both of the following conditions are true: (1) it is raining (R
= 1); or (2) both the weatherman predicted rain (W = 1) and it is cloudy (C = 1).
U = R OR (W AND C), or, in symbols: U = R ∨ (W ∧ C).
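The umbrella function is easy to check in code; this sketch enumerates all 2^3 = 8 input rows, with C (the rightmost, least significant input) varying fastest:

```python
from itertools import product

# U = R + WC: take an umbrella if it's raining, or if rain was
# predicted and it's cloudy.
def umbrella(r, w, c):
    return r or (w and c)

print("R W C | U")
for r, w, c in product((0, 1), repeat=3):
    print(r, w, c, "|", int(umbrella(r, w, c)))
```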
2.8 Abstraction
Abstraction is used to hide details that would
otherwise get in the way. We say that the details
have been abstracted away.
Chapter 3: Boolean Algebra
Two circuits are functionally equivalent if, in
every case, they produce the same output given
the same inputs.
3.3 Review
NOT has precedence over
anything else, followed by AND,
then OR and XOR.
3.4 Equivalence with Truth Tables
Show that for all inputs the functions have
the same outputs.
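A brute-force sketch of this idea: two functions are equivalent iff they agree on every one of the 2^n input rows. (The De Morgan example here is just an illustration.)

```python
from itertools import product

# Exhaustively compare two n-input Boolean functions.
def equivalent(f, g, n):
    return all(f(*row) == g(*row) for row in product((False, True), repeat=n))

# Example: one of De Morgan's laws, not(AB) = (not A) + (not B).
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))   # True
```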
3.5 Proving Equivalence Using Algebraic Substitution
We can't use truth tables if we don't know what we want to prove equivalence to, as when we want to simplify an expression.
Substitute one expression for another, equivalent one, using the laws of
Boolean algebra.
Levels of abstraction (highest to lowest):
- User view
- Assembly language
- Machine code level
- Firmware
- Functional units (CPU, memory, input/output; ALU, control unit, registers, etc.)
- Gates
- Electronic
- Quantum (subatomic)
3.5.1 Laws of Boolean Algebra
Most laws have an AND version and an OR version.
Each step in your proof will need to
be justified by a law of Boolean
algebra.
3.5.4 How to Apply the Laws
We can't use a law unless the expression matches the law's pattern exactly.
Chapter 4: Circuits from Functions
Computer components (logic circuits) realize functions.
Example: 50 varieties, 1 a month.
Truth table with don't cares:
- Let each variety be a unique pattern of 1s in the input.
- n input lines can represent 2^n different patterns.
- How many input lines? 6, since 2^6 = 64 ≥ 50.
- 64 - 50 = 14 unused combinations.
Example proof by algebraic substitution (showing ABC = CAB), with each step justified:

ABC      Given.
(AB)C    Precedence, definition of ().
XC       Let X = (AB).
CX       Commutative Law.
C(AB)    Substitution for X.
CAB      Precedence, definition of ().
Algebraic Expressions for Circuits
Don't cares: we don't care whether their corresponding outputs are 1 or 0 - leave these out or mark them with the symbol (-).
Gates visited in the order that the
subexpressions are evaluated, as values
travel along lines
from input to output (left to right).
Programmable Logic Array (PLA) is a
chip designed with NOT, AND, and OR
gates so it can handle arbitrary SOP
expressions.
A subarray of ANDs creates product terms
from the inputs.
A subarray of ORs takes input from the ANDs
and creates outputs.
4.4 Creating Boolean Expressions from Truth Tables
Do this by creating a sum of products (SOP) expression from the truth table. This is a Boolean expression in which there are one or more expressions, or terms, consisting of inputs or their negations ANDed together (the products), all joined by ORs (the sum). From the SOP expression, we can create the circuit diagram in a straightforward way.
The terms in our SOP expression will correspond to rows in the truth table for which the output is 1. The circuit should produce a 1 whenever any of these terms produces a 1, so these terms will be ORed together. So, for the truth table shown in Figure 4.8, there will be three terms, one for each row with a 1 as output. This is shown in the truth table in the column to the right of the output (F). The resulting expression is just these terms (the only cases when the function is true) ORed together. Another way of thinking of this would be that the term is true when A = 1 and B = 1 and C = 1.
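A sketch of this procedure, writing a prime (') for negation since overlines aren't available in plain text; the truth table used here is a made-up 3-input example, not the book's Figure 4.8:

```python
from itertools import product

# Build an SOP expression: one product term per truth-table row whose
# output is 1; a variable appears negated where its input bit is 0.
def sop(names, outputs):   # outputs: one 0/1 per row, last input varying fastest
    terms = []
    for row, out in zip(product((0, 1), repeat=len(names)), outputs):
        if out:
            terms.append("".join(n if v else n + "'" for n, v in zip(names, row)))
    return " + ".join(terms)

print(sop("ABC", [0, 1, 0, 0, 1, 0, 0, 1]))   # A'B'C + AB'C' + ABC
```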
In the PLA diagram for the SOP expression, fuses marked x are blown.
Commutative
The commutative law allows us to change the position of the operands of particular operators. The law applies to variables, not compound things (in parentheses), so substitute X for a compound subexpression first.
Associative
The associative law allows us to change the order in which operations are evaluated, that is, to associate different ones together first. This law only applies when the operators are of the same type and when the two operations appear together in the expression.
Absorption
The absorption law allows us to collapse a much larger expression into a single literal that appears in that expression.
Null and Identity
The null and identity laws allow us to simplify an expression in which a variable is combined with a constant (0 or 1).
Idempotent
When an operation applied to a value and itself yields the value itself, the operation is said to be idempotent. The idempotent law allows us to create (or remove) copies of literals.
Distributive
Allows you to combine operations in a different way. It's called the distributive law because one operator is distributed across another to do the transformation.
Double Negation
Double negation allows us to add or remove negations by asserting that negating an expression twice is the same as not negating it at all.
Inverse
The inverse law allows us to simplify an expression when the operands of an operator are a value and its inverse.
De Morgan's Law
Allows us to convert between AND and OR operations.
Chapter 5: Karnaugh Maps
A Karnaugh Map is a visual representation of a
Boolean Sum of Products (SOP) expression.
Each term is represented by a cell in a table Adjacent
cells differ in the "sign" of only one variable
Put a 1 in squares that correspond to the terms in the
expression.
Mark "don't cares" in Karnaugh maps with a D. Include them in circles only to make bigger groups. Make "don't cares" whatever is helpful.
Circle groups of 1s that are size of 2 to some power (1, 2, 4, 8, etc.) until
all 1s have been circled
Not necessary to circle groups of size 1 but they must be included
as terms in the resulting expression.
Circle the largest group possible to cover each 1
Only circle groups of 1s that are size 2^n.
Circles can wrap around the map.
A 1 can be included in more than one group.
There must always be at least one 1 in each group that is
not included in other groups.
Minimized value from a map: one term for each of the circles. For each circle, a variable will participate in the corresponding term if and only if its value is the same for all cells that have been circled. A variable can participate either as itself or as itself negated.
If two terms differ in only one variable and its complement (that is, the variable is positive in one term and negative in the other), then that variable does not contribute anything to the function's value and can be omitted.
Gray code - consecutive elements differ from their neighbors in exactly one bit: 00, 01, 11, 10.
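A short sketch generating an n-bit Gray code with the common i XOR (i >> 1) construction (the construction itself is not from these notes):

```python
# Consecutive codes differ in exactly one bit -- the Karnaugh map ordering.
def gray_codes(n):
    return [format(i ^ (i >> 1), f"0{n}b") for i in range(2 ** n)]

print(gray_codes(2))   # ['00', '01', '11', '10']
```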
Chapter 6: Adders
Combinational Circuits -
outputs determined by the current state of its inputs
Adder - a circuit that computes the sum of binary
numbers
Two input bits are added together to produce a sum bit plus a carry-out bit. For example, if A = 1 and B = 1, then the sum, 10 in base 2, would be represented as S = 0 and carry out = 1.
S - low-order (least significant) bit; C - high-order (most significant) bit.
Half-adder - adds two bits, but has no carry-in bit.
Binary: Base two, from right.
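The half-adder is just two gates; a minimal sketch:

```python
# Half-adder: sum is the XOR of the inputs, carry-out is their AND.
def half_adder(a, b):
    s = a ^ b   # sum bit (low order)
    c = a & b   # carry-out bit (high order)
    return s, c

print(half_adder(1, 1))   # (0, 1): 1 + 1 = 10 in binary
```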
Chapter 7: Parallel Registers
Registers are fast, basic memory used in the CPU and other places. They store several bits in parallel; the contents don't change until the Data Ready line is set high (1). Sometimes the register's contents are not immediately placed on the output lines: the register might have another input line that, when it is high, causes the register to place its contents on the outputs.
Sequential Circuits: circuits that can
remember their past states, and only
change their state for some values of
their inputs.
SR Latch: An SR latch is one that can be
set (i.e., its state = 1) or reset (state = 0) by
its input lines, and it will hold that state as
long as desired.
A latch is a circuit that for some values
of its input lines changes its state, and
then it holds, or latches, that value until
there is another particular pattern of its
input lines.
Latches and Flip-Flops: building blocks of
registers. They have input and output lines.
They also often have another input line that
determines when the data lines are gated
into the device.
Note there is feedback: some of the outputs (both, in this case) feed back into the inputs of the circuit. This is how the current state influences the future state of the device.
In order to analyze a sequential circuit we use a characteristic table. This has columns for the inputs and, instead of an output column, a column representing the next state, given the current state and the input(s).
Race Condition: When the answer
depends on which gate is faster and
cannot in general be predicted.
Certain input patterns can leave the circuit oscillating, unstable.
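A rough simulation of the latch's feedback, assuming a cross-coupled NOR-gate latch (the book's figure may use a different gate type):

```python
# Each NOR's output feeds back into the other NOR's input.
def sr_latch(s, r, q=0, nq=1):
    for _ in range(4):   # iterate until the feedback settles
        q, nq = int(not (r or nq)), int(not (s or q))
    return q
# Note: s = r = 1 is the problematic input; with simultaneous gate
# updates the outputs can oscillate rather than settle.

print(sr_latch(1, 0))             # set:   Q = 1
print(sr_latch(0, 1))             # reset: Q = 0
print(sr_latch(0, 0, q=1, nq=0))  # hold:  keeps previous state, Q = 1
```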
Chapter 10: RAID: Redundant Array of Independent Disks
- it's important to have a reliable memory system so that as many requests as possible can be processed
Magnetic disk storage - direct access - data has an address but is not RAM.
Primitive unit of a disk: the sector (512 bytes).
Access time = seek time + rotational latency - timing is not constant, unlike RAM.
Arm with read/write head
- moving it is 'seek time'
Rotational latency: time for the disk to rotate
to position the right sector under the head
Constant angular velocity: the disk spins at a constant speed, so data is more dense near the center.
Types of disks: depends on how close the head gets to the surface.
- The closer the head, the narrower the head can be, and the narrower the tracks - but also an increased chance of errors (e.g., from dust).
- Standard disks: the head floats on a cushion of air and does not come in contact with the surface.
- Floppy: the head touches the disk when reading and writing.
- Winchester: head and disk in a sealed unit, so the head can get closer to the disk - no contaminants.
Two measures of disk performance (they can contradict each other, i.e., bigger transfers mean a slower request rate):
- Transfer capacity - how much data can be read from or written to the disk in a given amount of time.
- I/O request rate - how many reads or writes can be accomplished in a given amount of time.
RAID - we have several disks in an array. Need to know the levels of RAID.
Logical disk - an abstraction.
Divide data into segments called strips.
Place strips on disks in a round-robin fashion (first strip to disk 1, second to disk 2...).
A stripe is all of the strips at the same location on all of the disks.
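The round-robin placement is simple modular arithmetic; a sketch, assuming disks are numbered from 0:

```python
# Map a strip index to (disk, stripe): strips go round-robin across disks,
# and a stripe is the set of strips at the same location on every disk.
def place_strip(strip_index, num_disks):
    return strip_index % num_disks, strip_index // num_disks

for i in range(6):
    disk, stripe = place_strip(i, 3)
    print(f"strip {i} -> disk {disk}, stripe {stripe}")
```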
Bus - set of wires used to communicate between devices and the CPU.
Data bus - transmits data; address bus - transmits the addresses at which data resides.
Many devices can communicate on the same bus, but there's no privacy on shared lines. A control scheme is needed to prevent two devices from accessing the bus simultaneously and garbling the information.
Memory Bus -
connects the
CPU to RAM.
System Bus - main bus in the computer itself. It can be thought of as connecting
the CPU to memory and the various I/O buses. In addition, there are I/O buses and
buses internal to the CPU and between the cache and the CPU.
Width of a Bus - number of lines in a bus. 32-bit computers, buses have 32 lines; 32 bits wide.
To put a 1 on a bus line, one asserts it; to put a 0 on the line, one negates it.
Arbitration - keeps more than one device from accessing a bus at the same time.

Centralized arbitration: a single arbiter, or controller, for the bus.
  Pros: simple control rules; the devices themselves don't do the arbitration, so they can be simpler.
  Cons: a dedicated arbiter as part of the bus means more cost; single point of failure - if the arbiter fails, the entire bus fails.

Decentralized arbitration: together, the bus devices determine control.
  Pros: no dedicated arbiter, saving money; no single point of failure.
  Cons: more complex scheme; each device needs circuitry or software, so more money.
Daisy chain bus - formed by hooking devices together with cables from one to the next. This is in contrast to a bus that is a set of continuous wires that each device plugs into. Daisy-chaining is a way to connect computer components.
Buses - communication lines that connect the CPU, the memory, and I/O devices.
Centrally-arbitrated daisy chain bus
Problem: more than one device can request the bus, and the arbiter can't tell which device sent the request. The arbiter only decides 'someone can use the bus,' not which device can use it.
Daisy chain buses handle this problem: when there is a request, the arbiter asserts the grant line. The first device on the bus notices this. If this device is the one requesting the bus, then it leaves its outgoing grant line negated and uses the bus. If it doesn't want the bus, it asserts its outgoing grant line.
Synchronous bus - another line carries a clock signal to the devices and arbiter. Arbitration occurs only during some clock cycles; bus use occurs between arbitration phases.
There is still a problem with this setup: devices closer to the arbiter have priority. If one of the closer devices has a high I/O rate, it may assert the request line every time there is an arbitration phase. Devices downstream seldom if ever get to use the bus; they starve.
Hidden arbitration - arbitration selects the device that will use the bus after the current one is done, so time isn't wasted. "Hidden" because the arbitration occurs while the bus is in use; there is no obvious arbitration phase. Needs a third line, acknowledgement (ACK), in addition to the grant and request lines.
If no one is using the bus, the arbiter asserts the grant line as usual in response to a request; the assertion
propagates down the daisy chain to the device that wants the bus. It begins using the bus, and
simultaneously it negates the request line and asserts the ACK line. When the arbiter notices that ACK
is asserted, it negates the grant line and is again ready to undertake arbitration.
Decentralized Arbitration -Arbiter is trivially replaced with a grant line that is always high (asserted)
and a busy line. When no one wants the bus, the grant line is propagated throughout the daisy chain.
When device i wants the bus, if the busy line is high, it waits. If not, it simply negates its outgoing grant
line, and asserts the busy line. It then begins using the bus. When it is done, it negates the busy line and
again passes the grant line to the next device in the chain. At this point, the next device can take the bus.
Booth's Algorithm - multiplication - can be implemented in hardware
Magnetic disks - external storage devices. A metal disc is coated with a magnetic polymer that can be magnetized and demagnetized over very small areas. In other words, very small areas of the disk can represent a 1 or a 0. Each disk is called a platter.
The disk drive contains read-write head(s) that sense the state of one of the small areas (the domains, or bits) or that can write them by changing their magnetic status.
Data on a disk is laid out in concentric circles, called tracks. If the disk has multiple platters, corresponding tracks at a given head position are called a cylinder.
Each track is further divided
into sectors. Sectors are
often grouped into blocks.
In standard hard disks, the heads actually float on a cushion of air caused by the disk's rotation, which keeps the head from actually touching the media.
Constant angular velocity - disks spin at a constant rate; data is more dense near the center.
Winchester disk - head and disk in sealed unit, no contaminants, sense smaller bits.
Optical storage - CD-ROMs and DVD-ROMs. Use a laser to read bits encoded as pits in plastic. Sectors are arranged along a single spiral groove. Access time is longer than for hard disks and capacity is lower.
Access time - time it takes to begin transmitting data from the disk once the disk drive
has received the request from the CPU. Access time = seek time + rotational delay.
Seek time is the time it takes to move the disk drive's read/write head to the track with the requested data; this depends on where the heads were prior to the request.
Rotational delay is the time it takes from the moment when the read/write head is on
the right track until the first data bit requested rotates under the head. This can be
computed based on the rotational speed of the disk:
Rotational delay: Tr = (1/2) × (1/R), where R is the rotational speed.
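A worked example of these formulas with made-up numbers (a 7200 RPM disk and a 4 ms average seek time):

```python
rpm = 7200
R = rpm / 60                       # rotational speed: 120 rotations/second
rotational_delay = 0.5 * (1 / R)   # Tr = (1/2)(1/R): on average, half a turn
seek_time = 0.004                  # seconds (assumed)
access_time = seek_time + rotational_delay

print(f"rotational delay = {rotational_delay * 1000:.2f} ms")  # ~4.17 ms
print(f"access time      = {access_time * 1000:.2f} ms")       # ~8.17 ms
```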
Transfer Rate - how fast data can be transferred from the disk, once the first bit is
under the read/write head.
I/O request rate - a measure of how many requests for data from the disk (or writes to the disk) can be completed in a given period of time.
RAID (redundant array of independent disks) can be used to optimize transfer rate, I/O request rate, or both.
Pros - increased reliability; increased speed of data access, either by increasing transfer rate or I/O request rate.
Cons - the cost of additional hardware, the complexity of the disk control algorithms, and a decrease in overall storage due to the need for error-correcting or error-detecting codes to be included with the data.
Logical disk - an abstraction of the real disks that can be viewed by user programs as just one large disk.
Data striping - divide disc data into equal-sized segments called strips.
A stripe consists of all the strips at the same location on multiple disks.
RAID 0 (RAID - Reliability in Storage):
- Data striped in round-robin; no redundancy or error-correcting mechanisms.
- Doesn't increase reliability.
- Increased transfer capacity or I/O request rate.
- No decrease in the amount of data each disk can store.
- No additional cost (we were going to use these disks anyway).
RAID 1:
- Two sets of disks; the sets are duplicates. Data is stored redundantly. Increased reliability.
- Double the number of disks doubles the cost.
- Increased transfer capacity or I/O request rate; increased speed.
- If a disk is busy, its duplicate can fulfill the request. Writing data is done to both disks, but the writes are done in parallel, so there is not much impact on speed.
RAID 2:
- Error-correcting Hamming code stored with the data bits.
- Cheaper than RAID 1; less reliable.
- This RAID level is not in use; modern disks provide error correction within the controller.
- The strips in this scheme are not really strips at all: data is spread throughout the array at the bit level. The drives are synchronized to be accessed at the same time, and all of the drives provide, simultaneously, the bits in one chunk of data. The extra disks provide the error-correcting Hamming code. As we will see later, the number of bits required for a Hamming code grows roughly as the log (base 2) of the number of data bits.
- All the disks must be synchronized, so there can be no simultaneous access of multiple strips to increase the I/O request rate. Data can be read speedily, since all the disks are working on the same request simultaneously. Writing is tedious, as each write requires a recomputation of the Hamming code.
Parity bits - used to detect errors. Some RAID levels use parity bits to detect errors.
Parity bits record whether the data has an odd or an even number of 1s in it: i.e., its parity. We add an extra bit (the parity bit) to the data and force the combined parity of the data plus the parity bit to be either odd or even by setting the parity bit to either 0 or 1.
When reading data, make sure that the computed parity matches the stored one. If not, we know there is an error in the data or the parity bit. In general, we don't know where the error is.
Parity bits can't detect an even number of errors - the data looks all good.
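A sketch of even parity, where the parity bit forces the total number of 1s to be even:

```python
# The parity bit is 1 exactly when the data has an odd number of 1s.
def parity_bit(bits):
    return sum(bits) % 2

def check(bits, p):   # recompute the parity and compare with the stored bit
    return sum(bits) % 2 == p

data = [1, 0, 1, 1]
p = parity_bit(data)          # 1: three 1s is odd
print(check(data, p))         # True:  no error
print(check([1, 1, 1, 1], p)) # False: single-bit error detected
print(check([1, 1, 0, 1], p)) # True:  two flipped bits slip through undetected
```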
RAID 3:
- Strips are very small. The disks are synchronized. Uses parity bits stored on a parity disk.
- The parity byte is computed by taking the parity of each of the bits on the other drives.
- Increased transfer speed because of the small strips.
- Bottleneck: the parity disk. This disk has to be read each time the other disks are read.
- Detects single-bit errors.
- If a disk does not provide a strip, that strip's data bits can be reconstructed from the remaining data bits and the parity bits. This allows the RAID array to continue even when one disk has failed.
- The cost of RAID 3 is minimal; only one additional disk drive is needed. So 1/n of the total disk space is devoted to parity and thus lost to overhead, where n is the number of disks.
RAID 4:
- Parity is computed for the bits in each stripe, and the parity bits are stored on a dedicated parity drive.
- Strips are much larger than in RAID 3.
- The drives do not have to be synchronized to all access the same addresses simultaneously. This RAID is an independent-access array: multiple drives can simultaneously satisfy requests, thus speeding up the I/O request rate.
- Increased reliability; transfer rate is unaffected. Cost is the same as RAID 3.
- To write a block, however, the old parity strip must be read first, modified with the new parity, then written. Consequently, the parity drive becomes a bottleneck for RAID 4.
RAID 5:
- Attempts to get rid of the RAID 4 bottleneck: same strip size, but no dedicated parity drive.
- Parity blocks for stripes are distributed across the drives. Multiple requests can be handled simultaneously, with parity information coming from multiple disks.
- Similar benefits and costs to RAID 4.
Sign-Magnitude Representation:
Magnitude is the absolute value of the number.
Sign bit is 0 if the number is positive and 1 if not.
Problem: Two ways to represent 0, 00000000 and 10000000.
Two's complement representation:
A positive number looks exactly like its sign-magnitude form.
Negative numbers, however, are represented in what is known as the two's complement of the number's absolute value.
The two's complement of a number is formed by first taking the one's complement (switching zeros and ones). Ex: the one's complement of 00111111 (63 in decimal) is 11000000.
Then the two's complement of the number is formed by adding 1 to the one's complement. Ex: for 00111111, 11000000 + 1 = 11000001.
A number plus its two's complement adds to 0.
To find the value of a two's complement number:
- If the high-order bit is 0, convert from binary as usual.
- If the high-order bit is 1, take the two's complement to find the absolute value, and negate it. The two's complement of a number's two's complement is the number itself.
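A sketch of two's complement at a fixed width (8 bits is an arbitrary choice for the example):

```python
BITS = 8
MASK = (1 << BITS) - 1   # 0xFF

def twos_complement(x):       # flip the bits (one's complement), then add 1
    return ((~x) + 1) & MASK

def to_signed(pattern):       # interpret an 8-bit pattern as a signed value
    return pattern - (1 << BITS) if pattern >> (BITS - 1) else pattern

print(format(twos_complement(0b00111111), "08b"))           # 11000001
print(to_signed(0b11000001))                                # -63
print((0b00111111 + twos_complement(0b00111111)) & MASK)    # 0: x + (-x) = 0
```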
Booth's Algorithm - an algorithm underlying computer-based multiplication.
Suppose we have the number 0111110011110000 (base 2).
Using Booth's algorithm: (2^15 - 2^10) + (2^8 - 2^4) = (32,768 - 1,024) + (256 - 16) = 31,984.
3 × 14 = 0011 × 1110 = 0011 × (2^4 - 2^1) = (0011 × 2^4) - (0011 × 2^1) = 00110000 - 00110 = 101010 = 42 in decimal.
A block of k 1s, starting at bit n of the number, is: 2^n + 2^(n-1) + 2^(n-2) + ... + 2^(n-k+1) = 2^(n+1) - 2^(n-k+1).
Booth's algorithm: cycles of shifting, or addition or subtraction and shifting. Compare the least significant bit of column Q with the content of column Q-1:
- Least significant bit of Q matches Q-1: just shift.
- Least significant bit of Q is 0, Q-1 is 1: add M.
- Least significant bit of Q is 1, Q-1 is 0: add -M.
http://www.youtube.com/watch?v=MklsYxdukNw
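A sketch of the cycle described above in the usual A/Q/Q-1 register formulation; the 8-bit width is an assumption for the example:

```python
def booth_multiply(m, r, bits=8):
    mask = (1 << bits) - 1
    M = m & mask                   # multiplicand, two's complement
    A, Q, q_1 = 0, r & mask, 0     # accumulator, multiplier, extra Q-1 bit
    for _ in range(bits):          # one cycle per multiplier bit
        q0 = Q & 1
        if q0 == 1 and q_1 == 0:   # Q ends 1, Q-1 is 0: add -M
            A = (A - M) & mask
        elif q0 == 0 and q_1 == 1: # Q ends 0, Q-1 is 1: add M
            A = (A + M) & mask
        # arithmetic right shift of the combined A:Q:Q-1 register
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = ((A >> 1) | (A & (1 << (bits - 1)))) & mask   # sign bit stays put
    result = (A << bits) | Q
    if result >> (2 * bits - 1):   # interpret the product as signed
        result -= 1 << (2 * bits)
    return result

print(booth_multiply(3, 14))   # 42
print(booth_multiply(-3, 14))  # -42
```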
8-bit representation of a non-negative binary number (e.g., 11111111): bits are labeled Bit 7 (leftmost, most significant) through Bit 0 (rightmost, least significant).
Architecture - how a computer, or part of a computer, is organized.
Assembly Language: low level language used to control CPU
A Von Neumann computer is a stored program computer, which means that its programs and data are stored in memory. A program can write data to be executed as code, or a program can be examined by another program as data. Von Neumann machines also have a single thread of control: they execute one instruction at a time.
fetch an instruction from memory > decode it > execute it
CPU: where programs are actually carried out and where computations take place; all information that flows between the memory and the I/O either goes through the CPU or is controlled by it. It is composed of connected functional units. There is an arithmetic logic unit (ALU) that actually carries out computations. There are registers, which are high-speed memory for CPU operations. There is a control unit in charge of fetching and interpreting instructions and causing the rest of the CPU to carry out the right tasks.
Memory: internal storage or external.
Internal storage: in the CPU or on the motherboard (CPU registers, cache, RAM).
External storage: memory accessed as input/output by the CPU (discs, hard drives, flash drives, etc.).
Although computers can operate on bytes, most operations and most memory accesses are in terms of larger units of memory called words.
The size of a word is CPU-dependent and has to do with the width of (i.e., number of bits in) the data bus, which defines the basic unit of memory access; a word is the same size as the data bus. Thus a 32-bit machine can directly address 2^32 bytes.
Some memory can only be accessed sequentially: to access byte n, all bytes from 0 to n-1 must first be accessed. Some memory is random access: each unit of storage has a unique address and can be directly accessed. Access time is constant with RAM.
In between these extremes is direct access. In direct access, an address is used to get to a general location in the storage device, then sequential search begins from there. Disks have this property. Disks are laid out with data in concentric rings known as tracks. Each track is divided into sectors containing data. Access time, then, is not constant, but rather a function of how long it takes to get the head to the right track (the seek time) and for the right sector to come underneath the head (the rotational delay).
Associative access: finds info in memory based on the information itself. Each word of
an associative memory has some additional bits that are set based on the information in
the word. These bits are a key: they uniquely identify the memory location.
Registers are used wherever speed is paramount and cost is no consideration, since
registers are expensive relative to other memory types.
Cache: any kind of memory where a portion of the information in a slower kind of
memory is stored to increase speed. i.e. Disk cache. Cache is not as expensive as
registers, but it is still expensive.
Data on external storage is cached in main memory. Data in
main memory is cached in cache. Data in the cache is cached in registers.
Locality of reference: data or memory locations that are used are likely to be used
again soon. Caching can often be enhanced by speculatively caching memory
locations: when there is a reference to a memory location, the CPU may bring in that
location to cache along with some of its surrounding locations. If there is locality of
reference, then the chances are good that the next location will already be in the cache.
The CPU and memory are the central portion of the computer; I/O devices are peripheral.
Secondary storage devices: CD-ROM
drives, DVD drives
Interactive input devices: keyboards,
mice, tablets, touchscreens, etc.
Display devices: displays, printers, virtual walls, caves (in which all surfaces of a room function as display devices), and multi-touch devices.
I/O device = peripheral device: any piece
of hardware to which the memory or CPU
send data and from which they receive it.
Network interfaces: allow the computer
to communicate, Ethernet interfaces, Wi-
Fi hardware, and modems.
Sensors: any device that can translate signals in the external world into data for the
computer to process. Ex. thermometers, battery charge sensors, fingerprint readers,
sonar for land robots, and cameras.
Effectors: devices that the computer can use to take action in the real world. Examples include the wheels on a land robot, robot arms, and the thrusters on a UAV.
I/O devices vary in their speed of access and access method. Some devices are purely
sequential, i.e. keyboard, others are direct access, i.e. hard drives.
Main memory = RAM
(random-access memory):
cheap, fast memory.
External memory is cheaper
still, which translates to more
storage capacity per dollar.
Character device: reads or writes a single character at a time. Ex. Keyboard
Block device: read and write a block of data at a time. Ex. hard drive reads and writes
blocks of 512 bytes, 1 KB, etc.
Machine instructions tell the CPU what operations are to be performed on what data.
byte: 8 bits, always - used for both internal and external memory
word: addressable unit, size depends on the machine - used for internal memory only
block: unit of transfer from external memory to internal memory
Interrupts alert CPU to something important that has happened so the CPU does
not have to constantly check for each potential situation.
Bus width: number of lines.
Assert: put info on a line. Negate: remove information from a line.
Bus arbitration: controls which device gets the bus.
A CPU has three parts: an arithmetic logic unit, a control unit, registers.
Arithmetic logic unit (ALU): where the actual computation takes place within the CPU.
Addition, logical operations such as shifting, etc., happen here.
Registers: used as a short-term, very high-speed memory for the CPU. Operands
being operated on, for example, or the results of operations, might be contained here.
Control unit: responsible for the overall operation of the CPU. It is responsible for
fetching instructions from memory, placing data in registers, telling the ALU what to do.
All data the CPU deals with is comprised of binary strings
Most CPUs have at least these registers:
Program/instruction counter: tracks where in memory the program's current execution point is. It holds the address of the next instruction to be fetched and executed.
Instruction register holds the current instruction.
Program status word (PSW): contains information about the status of the processor
and the execution of the program.
The CPU understands machine language. The set of all instruction types the CPU can
carry out is called its instruction set.
Instructions are stored in memory as a machine language program. The basic process
of a CPU consists of the fetch-decode-execute cycle.
Assembly language: a more abstract language made to represent machine language.
Machine language can be entered into the computer's memory for execution. An assembly language program must first be translated into machine language, or assembled, by another program, the assembler. Assembly language is still a low-level language.
high-level languages: bridge gap between CPU and
human language
High-level languages are compiled to assembly or
machine language by a compiler, or they are interpreted
by an interpreter that reads them and carries out the
program.
Packed decimal: each nibble (four-bit chunk) of a byte stores a binary number representing a decimal digit 0-9.
Advantage: decimal numbers, even non-integers, can be stored exactly. Unfortunately, arithmetic is made more difficult.
Floating point numbers: subset of real numbers.
Each character is encoded as a unique binary string. A
very common encoding scheme is ASCII (American
Standard Code for Information Interchange), in which
each character is encoded by a single byte.
2-byte Unicode representation: 65,536 characters can be represented - enough for any human natural language.
String: data type for strings of characters. Not basic
hardware data types, but defined by the programming
language.
CPUs deal with logical data. A basic data type is the logical Boolean, which is one bit.
Most CPU instructions are: data transfer, arithmetic, logical, control transfer, and system control.
Data transfer instructions: used to move data from place to place within the computer.
A data transfer instruction needs operands that describe the source and the destination
of the data to be moved.
Arithmetic instructions: do mathematical operations on data in the ALU or memory.
An arithmetic shift instruction shifts all bits in a register or memory location to the left or right. If it is shifting to the left, then the new low-order bit is set to 0. If it is shifting to the right, then the new high-order bit is kept the same (preserving the sign).
Rotate instructions are similar, except that the bit that is shifted out of the word is
used as the new bit.
Masking: a way to check a bit by isolating it using logical operations. A mask is applied to the register to clear all of the bits other than the one we are interested in. At that point, we can see if the register contains binary 0; if it does, then the bit was 0, else it was 1.
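A sketch of these operations on an 8-bit value (real instructions do this in hardware; the width is arbitrary):

```python
BITS, MASK = 8, 0xFF

def shift_left(x):           # arithmetic left shift: new low-order bit is 0
    return (x << 1) & MASK

def shift_right_arith(x):    # arithmetic right shift: high-order bit kept
    return (x >> 1) | (x & 0x80)

def bit_is_set(x, n):        # mask off everything but bit n
    return (x & (1 << n)) != 0   # a zero result means the bit was 0

v = 0b10010110
print(format(shift_left(v), "08b"))         # 00101100
print(format(shift_right_arith(v), "08b"))  # 11001011
print(bit_is_set(v, 2))                     # True
```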
One type of control transfer instruction is the branch or jump instruction. These can be
unconditional or conditional. Often, unconditional jumps are called jumps, while
branches are those that are conditional.
An absolute jump specifies the target by an operand that represents the address. A
relative jump has an operand that is taken to be an offset from the current instruction.
Subroutines: pieces of code that can be used by the rest of the program, or even other programs, to perform some function. CPUs provide instructions to transfer control to a subroutine and to return control to the calling program when the subroutine is finished. The first is sometimes called jumping to a subroutine, but most often calling the subroutine. The latter is called returning from the subroutine.
A stack is a data structure that functions somewhat like a stack of paper. This is called last in, first out (LIFO) access. LIFO access is a good way to represent recency: the most recently-used piece of paper is on top.
System control instructions: have to do with accessing or controlling CPU resources. These instructions are often protected, so some programs can't use them. CPUs usually have two or more modes they can be in, with some levels having more rights than others. The operating system runs in the highest, most capable level. This is usually called kernel mode, since it is the mode in which the OS kernel runs. This allows the operating system to limit what user programs have access to. System control instructions are often used to access protected registers or protected memory. The CPU can mark a region of memory for no access or read only.
Input/output instructions are usually protected and only executable in kernel mode. This
prevents overwriting by multiple programs.
Addressing modes: ways in which an operand in an instruction can specify where the
data is that is to be operated on, or where data is to be stored.
Immediate addressing mode: the operand is contained within the instruction itself.
One advantage is that there is no additional memory reference needed.
LD R3,#23
The sharp sign denotes an immediate-mode operand, in this case 23 in decimal.
Register addressing mode: an operand is contained in a register, or a register is the
target of an operation.
Advantage: no memory access is needed, since the data is already in the register or can be stored to a register. If commonly-used operands are kept in registers, then execution speed increases. RISC machines make heavy use of register addressing and provide large numbers of registers. Another advantage of register addressing is that there are relatively few registers compared to the number of memory locations, so only a few bits are needed in the instruction to specify the operand. RISC machines seek to keep each instruction the size of a single word in order to speed up the processor.
Direct addressing mode: a CPU needs to be able to retrieve from and store to memory, and the direct addressing mode is used for this. Here, the operand is the address of the memory location containing (or to contain) the data.
Indirect addressing mode: the operand is an address that contains not data, but the address at which the data can be found. This increases the flexibility of the instruction set by avoiding having to hard-code addresses into the program. The cost of this flexibility is, in the case of indirect addressing, an additional memory location and, hence, an additional memory access.
Register indirect addressing: like indirect addressing, but the indirect operand's address is found by looking in a register rather than in a memory location.
Instructions are divided into fields, sets of bits that correspond to the opcode, mode
bits, operands, etc.
Program status word should be protected from change, since this register reflects the
status of the processor and ALU operations as determined by the processor itself. The
instruction counter is also protected. General-purpose registers should be unprotected.
Debate about the number and kind of instructions a CPU should provide.
RISC: relatively few and simple instructions; this "reduced" philosophy gives reduced instruction set computers.
CISC: a large number of rather complex instructions, including some very specialized ones; such machines are called complex instruction set computers.
RISC machines have several advantages. Since there are relatively few instructions, it's possible to optimize each so they run rapidly. Less complexity means less circuitry, so less space is needed on the chip, leaving more space for other things. RISC machines have a lot of registers: the more operations carried out using registers, the faster the processing, since memory access is slower. Since the CPU is smaller, its components are closer together, and thus faster. All instructions are the same size, so the entire instruction can be read at once, and often executed within one or a few machine cycles.
The primary drawback: with simple instructions, it takes more of them to do the same
thing. Not only does this counter the speed-up gained by making each instruction faster,
but it also makes executable files bigger, since they have to contain more instructions.
Counter-argument: both memory and disc are cheap compared to increasing the processor's speed, so it is worth the trade-off.
CISC: instructions tend to be variable length, and a rich set of addressing modes is
supported. A benefit of CISC chips is ease of programming. Also, it is common wisdom
that if something can be done in hardware rather than software, it should be, since that
will drastically increase speed: so, by this reasoning, complex instructions exist to avoid
the need to use several or many simple RISC instructions.
Drawbacks: the complexity of the CPU and the amount of space needed for the control unit and ALU, at the expense of registers and cache. The variable-length instructions increase the complexity of the CPU's fetch-execute cycle, since multiple fetches may be needed, and the CPU won't know this until it has fetched and partially decoded the first part of the instruction. Instructions in a CISC chip are not all done in the same amount of time, nor are they all done at the CPU's clock rate. Multiple clock cycles are needed to execute the complex instructions.
Variables and Primitive Data Types
Variables can change at runtime.
Variables can be of different types and sizes.
Variables point to a location in memory.
Variables can have attributes: name, value, type, scope (where it can be seen: local, global), and lifetime (the variable is only live within its scope).
Instance Variables: variables that are tightly associated with objects
Identifier: variable name, way of referring to a variable
Identifiers exist as strings or symbols in a symbol table. Symbol tables make sure that
all references to the same variable refer to the same memory location.
A variable can be accessed by a pointer, another variables that contains its address.
Assignment: assigning a value to a variable.
Constant binding: defines numeric constants that don't change.
Dynamic scoping: which variable is referred to depends on the order in which variables are used.
Static binding: set up at load time, unchanged. Dynamic binding: can change over the runtime of the program.
Primitive data type: could be defined as either a data type that is not composed of any
others, or one that is provided by the programming language.
Scalar data types: data types that are not composed of any others.
Integer: represents positive and negative whole numbers (fixed-point numbers)
Floating point numbers: have a fractional part beyond the decimal (radix) point. Can't represent repeating or irrational numbers exactly.
A problem with ASCII is that it is an 8-bit code. It can represent only 256 characters.
Unicode characters can be longer than 8 bits, depending on the Unicode form used.
Strings: non-scalar data type, strings of characters, example of a compound, or
composite, data type, as they are composed of characters, another primitive type.
Arrays: a composite data type; represent groups of related data. An array stores data elements of one type, and the data elements are stored together in memory. Arrays have dimensions. A single-dimensional array, a vector, is composed of a set of data elements that can each be identified by a single index (subscript). Often, the lower bound of an array index is 0 (that is the first index). This means that the upper bound for an index, if the array has n elements in that dimension, is n - 1. The starting address of an array is called the base address of the array. An array is a random access data structure: all elements can be found in the same amount of time.
Vector - single dimension array
Address = Base Address + (Index - Lower Bound) * Size
A = B + ( I - L ) * S
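The formula as code, with made-up values for the base address and element size:

```python
# A = B + (I - L) * S
def element_address(base, index, lower_bound, size):
    return base + (index - lower_bound) * size

print(element_address(1000, 0, 0, 4))   # 1000: the first element is at the base
print(element_address(1000, 5, 0, 4))   # 1020: five 4-byte elements later
```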
If we store rows first in a two-dimensional array, this is called row-major order, if we
store the columns first, it is called column-major order.
Label the Indices: x,y,z could be used, but s,r,c are most common (slice, row, column)
Record/ Structure: data type allowing
multiple kinds of elements to be stored.
Enumerated type: specifies a set of values a
variable having the type can contain.
List: an ordered set of things, a list is either implemented as an array, or as a (non-
primitive) data structure called a linked list, in which each item in the list has associated
with it a pointer to the next item.
Sets of things can be represented as arrays, lists, or sequences, but a set is not
ordered, nor do sets usually allow duplicate members.
Chapter 17: Process Synchronization: Semaphores
The problem: synchronizing processes so errors do not occur.
Problem if: (1) processes share a resource and (2) need to use the resource in a way
that is incompatible with its simultaneous sharing with others.
Deadlock: processes waiting for a resource that is in use.
Race condition: processes share a resource; the outcome depends on the timing or relative speed of the processes.
Transaction: when a process accesses shared data.
Critical region: the part of the program that deals with the transactions.
Solution: we need to force the clusters of transactions in each process to be executed serially, not in parallel. A CPU's instructions are atomic, that is, uninterruptible; everything else is not. But we need critical regions, too, to be atomic. We need to enforce mutual exclusion with respect to each process's critical region: neither process can be in its critical region (its program counter cannot be within the critical region) while the other process is within its own critical region.
How:
Process synchronization primitives: low level process synchronization
mechanisms. Other more abstract and high-level mechanisms are often built from them.
Interrupt blocking: not allowed in user-level processes because it could prevent the O.S. from doing needed tasks. Also, if the program counter never exits the critical region, say because the process needs to make a system call but can't because of the blocked interrupts, the process would block the system from running anything forever.
Spin locks: a process voluntarily waits, repeatedly checking, if it detects via a flag that another process is in its critical region. A flag (a lock) is set when a process is in its critical region. But the lock itself is a shared resource! So... another race condition. Solution: processors provide instructions to allow atomic access to locks, like the test and set lock (TSL), which atomically checks the value of a lock variable and then sets it. Problem with the TSL: wasted CPU cycles while processes are waiting and checking the locks.
Semaphores: to avoid spinning, we need a way for a process to give up the
CPU when it is waiting to enter its critical region, and for the operating system not to run
that process again until it can enter its critical region. A semaphore is basically just a
data structure and some subroutines.
Mutex semaphore: the simplest kind of semaphore; enforces mutual exclusion. When a process wants to enter its critical region, it checks whether the semaphore is up (free) or down (in use). If a process calls Down on a semaphore that is already down, the semaphore causes the process to block. It is removed from the OS's ready queue and placed on a queue of processes waiting for the resource, so it is no longer considered by the scheduler as a possibility to run.
Counting semaphores: used to keep track of the number of resources available
and to block processes when there is more demand than supply.
Producerconsumer: there are two processes, one that simply produces items and
attempts to put them into a shared location of finite capacity, and another that simply
takes the items out of the location. The location is called a buffer and this problem is an
instance of a bounded-buffer problem. We want the producer to wait when the buffer
is full, and we want the consumer to wait when it is empty.
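A sketch of the bounded-buffer pattern with counting semaphores, using Python threads (the buffer capacity and item count are arbitrary):

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion for the buffer itself
empty = threading.Semaphore(CAPACITY)  # counts free slots; producer blocks at 0
full = threading.Semaphore(0)          # counts filled slots; consumer blocks at 0

def producer():
    for item in range(6):
        empty.acquire()      # Down(empty): wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()       # Up(full): one more item available

def consumer():
    for _ in range(6):
        full.acquire()       # Down(full): wait for an item
        with mutex:
            item = buffer.popleft()
        empty.release()      # Up(empty): one more free slot
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```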
Event counters: count the kinds of events that have happened to determine if a
buffer is full and if a process should block or not.
Monitor: a non-primitive, language-level synchronization mechanism. A monitor is something like an object that surrounds and protects a shared resource. It has methods that are the only way to access the shared resource. A monitor allows only a single process to be active inside it at once.
Goals of virtual memory (VM): to overcome the memory
limitations of a computer to allow more processes to run
simultaneously (increase the degree of multiprogramming)
and to allow large processes to run.
VM keeps part of the program in memory.
VM uses the computer's disk(s), that is, secondary storage, to hold the programs that are running. The CPU can't execute directly from disk, and disks are slower.
Run-time stack: where parameters and return addresses get stored when calling subroutines. The gap is an unused portion of memory. Data grows upward as the program runs, while the stack grows downward as the program calls subroutines.
Virtual memory - simply bring into memory the portion of a process's memory actually in use, leaving all the rest on disk. As new parts are needed, they can be brought in, possibly replacing those already in memory. Since only a tiny part of each process is in memory (is resident) at a time, many more processes can share main memory. Also, the size of a process is limited only by the address space size or the amount of disk, and the total number of processes is limited only by the size of disk available.
In VM, each process has access to all of its address space. This is called the process's virtual memory. Virtual memory is divided into small pages, usually 2-4 KB in size. These are the units of the process that will be moved into main memory as needed. Physical memory (that is, main memory, or RAM) is similarly divided into chunks of the same size that can hold pages. These are called page frames, since they are holders for pages.
Demand Paging: pages still on disk are brought into main memory as they are needed
by the process.
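With a power-of-two page size, splitting a virtual address into page number and offset is just shifts and masks; a sketch assuming 4 KB pages:

```python
PAGE_SIZE = 4096     # 2^12 bytes
OFFSET_BITS = 12

def split_address(vaddr):
    page = vaddr >> OFFSET_BITS        # which page the address falls in
    offset = vaddr & (PAGE_SIZE - 1)   # position within that page
    return page, offset

print(split_address(0x12345))   # (18, 837): page 0x12, offset 0x345
```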
Four conditions for deadlock:
Mutual exclusion: a resource is either available or assigned to at most one process.
Hold-and-wait: a process can hold one resource and then ask for others.
No preemption: can't take a resource away from a process once assigned.
Circular wait: two or more processes in a circle, each waiting for a resource held by the next process in the circle.
Formal language
Languages do not even have to have anything to do with words. For example, a Lindenmayer system, or L-system, is a formal language that is used, among other things, for modeling plant growth. Here, the language's grammar specifies how (e.g.) a new branch can bud from an existing one.
So a language really is about describing a set of things: valid plant growth patterns, search expressions, programs, sentences and sonnets, and so forth. If it is done right (and it may not always be possible), a language specifies those and only those things in the desired set.
Note that the sets of things we describe with languages are often infinite. This is certainly true of natural language and most useful kinds of formal languages.
All languages have a particular syntax. Syntax refers to the form of something, in this case, the valid ways the basic units of the language (e.g., words in English) can be put together to form larger units, and how those units can be combined, etc. When we learned grammar in elementary school, we were learning the syntax of our native language.
Computer and human languages also have another attribute that is important: semantics. The semantics of a language is what the meaning of its valid constructs is.