
Assignment 4

Computer Organization and Architecture

Submitted by:-
Nishtha Wadhawan
MCA-Mtech [1604D]
B34
E3004
Group 2

Submitted to:-
Lect. Richa Malhotra

Part - A
Q1: Design a four-bit combinational incrementer and decrementer circuit using full
adders.

4-bit combinational incrementer using full adders: Incrementing means adding 1 to the
least significant bit of the input. To obtain the increment, a carry of 1 is fed into the least
significant full adder, and the carry output of each stage ripples into the next.
Incrementer operation: A + 1

[Figure: 4-bit incrementer built from four cascaded full adders. The input bits X3 X2 X1 X0
feed the A inputs, a constant 1 enters the least significant stage, each carry output C feeds
the next stage, and the outputs are C4 and S3 S2 S1 S0.]

4-bit combinational decrementer using full adders


A decrementer performs the operation A - 1, that is, A plus the 2's complement of 1
(i.e. 1111 in four bits).

[Figure: 4-bit decrementer built from four full adders. The input bits X3 X2 X1 X0 feed the
A inputs, and a constant 1 (the value 1111) is applied to the B input of every stage.]
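A minimal C sketch of these two circuits, simulating the ripple of full-adder stages (the
function names are my own illustrative choices, not from the text):

#include <stdio.h>

/* One full adder: returns the sum bit and writes the carry-out through *cout. */
static int full_adder(int a, int b, int cin, int *cout) {
    *cout = (a & b) | (a & cin) | (b & cin);
    return a ^ b ^ cin;
}

/* Incrementer: B inputs tied to 0, carry-in of the least significant stage tied to 1. */
static unsigned increment4(unsigned x) {
    unsigned result = 0;
    int carry = 1;                       /* the "+1" enters as the initial carry */
    for (int i = 0; i < 4; i++)
        result |= (unsigned)full_adder((x >> i) & 1, 0, carry, &carry) << i;
    return result & 0xF;                 /* the carry out of stage 3 is discarded */
}

/* Decrementer: B inputs tied to 1 (the constant 1111 = 2's complement of 1), carry-in 0. */
static unsigned decrement4(unsigned x) {
    unsigned result = 0;
    int carry = 0;
    for (int i = 0; i < 4; i++)
        result |= (unsigned)full_adder((x >> i) & 1, 1, carry, &carry) << i;
    return result & 0xF;
}

int main(void) {
    unsigned x = 5;                      /* 0101 */
    printf("%u -> inc %u, dec %u\n", x, increment4(x), decrement4(x));  /* 5 -> 6, 4 */
    return 0;
}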

Q2: Register A holds the 8-bit binary value 11011001. Determine the B operand and the
logic micro-operations to be performed in order to change the value in A to (a) 01101101
(b) 11111101.

(a) Target 01101101 (complement selected bits with XOR):
A           11011001
B           10110100
------------------------
A <- A ⊕ B  01101101

(b) Target 11111101 (set selected bits with OR):
A           11011001
B           11111101
------------------------
A <- A ∨ B  11111101
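These choices can be sanity-checked with a throwaway C sketch. For (a), B is obtained as
A XOR target, since A ⊕ (A ⊕ target) = target; for (b), the target itself serves as B because
every 1 bit of A is also 1 in the target:

#include <stdio.h>

int main(void) {
    unsigned char A = 0xD9;              /* 11011001 */

    /* (a) complement selected bits: B = A XOR target, then A <- A XOR B */
    unsigned char target = 0x6D;         /* 01101101 */
    unsigned char B = A ^ target;        /* 10110100 = 0xB4 */
    printf("(a) B = %02X, A XOR B = %02X\n", B, A ^ B);   /* B4, 6D */

    /* (b) set selected bits: A <- A OR B with B = 11111101 */
    unsigned char B2 = 0xFD;             /* 11111101 */
    printf("(b) A OR B = %02X\n", A | B2);                /* FD */
    return 0;
}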

Q3: Starting from an initial value of R = 11011101, determine the sequence of binary
values in R after a logical shift left, followed by a circular shift right, followed by a
logical shift right and a circular shift left.

Initial value of R= 11011101

Logical Shift left= 10111010

Circular shift right= 01011101

Logical shift right= 00101110

Circular shift left= 01011100
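The same sequence can be reproduced with a short C sketch (hexadecimal is used for
compactness; 0xDD = 11011101):

#include <stdio.h>

#define MASK 0xFFu   /* keep every result confined to 8 bits */

static unsigned shl(unsigned r) { return (r << 1) & MASK; }               /* logical shift left   */
static unsigned shr(unsigned r) { return (r >> 1) & MASK; }               /* logical shift right  */
static unsigned cir(unsigned r) { return ((r >> 1) | (r << 7)) & MASK; }  /* circular shift right */
static unsigned cil(unsigned r) { return ((r << 1) | (r >> 7)) & MASK; }  /* circular shift left  */

int main(void) {
    unsigned r = 0xDD;                       /* 11011101 */
    r = shl(r); printf("shl: %02X\n", r);    /* BA = 10111010 */
    r = cir(r); printf("cir: %02X\n", r);    /* 5D = 01011101 */
    r = shr(r); printf("shr: %02X\n", r);    /* 2E = 00101110 */
    r = cil(r); printf("cil: %02X\n", r);    /* 5C = 01011100 */
    return 0;
}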

Part – B

Q4: How are vector processors more efficient than conventional processors in
handling complex numerical operations on large arrays?
Vector processors:- A vector computer or vector processor is a machine designed to
efficiently handle arithmetic operations on elements of arrays, called vectors. Such machines
are especially useful in high-performance scientific computing, where matrix and vector
arithmetic are quite common. The Cray Y-MP and the Convex C3880 are two examples of
vector processors used today.

The simplest physical interpretation of a vector in 3-dimensional space is an entity
described by a magnitude and a direction, which can be captured by three numbers: x, y,
and z. A vector [a b c] is said to have 3 elements, and is stored in memory as an array.

A vector computer contains a set of special arithmetic units called pipelines. These
pipelines overlap the execution of the different parts of an arithmetic operation on the
elements of the vector, producing a more efficient execution of the arithmetic operation. Just
as real numbers (float, double) and integers (int, long) can be manipulated, so vectors can be
manipulated, but the operations are quite different.

There are two simple vector operations (sketched in code after this list):

• Scaling - physically stretching or shrinking the length of the vector, done by
multiplying each element in the vector by the same scaling factor, e.g. a*[x y z] == [ax
ay az].
• Addition - creating a new vector by parallelogram addition, done by adding the
respective first, second and third elements, e.g. [a b c] + [x y z] == [a+x b+y c+z].
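Both operations can be written as plain C loops over 3-element arrays (a scalar sketch; on
a vector processor each loop would be issued as a single vector instruction feeding a
pipeline):

#include <stdio.h>

#define N 3

/* Scaling: multiply every element by the same factor. */
static void scale(double a, const double v[N], double out[N]) {
    for (int i = 0; i < N; i++)
        out[i] = a * v[i];
}

/* Addition: add corresponding elements of two vectors. */
static void add(const double u[N], const double v[N], double out[N]) {
    for (int i = 0; i < N; i++)
        out[i] = u[i] + v[i];
}

int main(void) {
    double v[N] = {1.0, 2.0, 3.0}, u[N] = {4.0, 5.0, 6.0}, r[N];
    scale(2.0, v, r);                    /* 2*[1 2 3] == [2 4 6] */
    printf("[%g %g %g]\n", r[0], r[1], r[2]);
    add(u, v, r);                        /* [4 5 6] + [1 2 3] == [5 7 9] */
    printf("[%g %g %g]\n", r[0], r[1], r[2]);
    return 0;
}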

Advantages of vector processing


• Each result is independent of previous results - allowing deep pipelines and high clock
rates.
• A single vector instruction performs a great deal of work – meaning fewer fetches and
fewer branches (and in turn fewer mispredictions).
• Vector instructions access memory a block at a time which allows memory latency to
be amortized over many elements.
• Vector instructions access memory with known patterns, which allows multiple memory
banks to simultaneously supply operands.
• Less memory access = faster processing time.
There is a class of computational problems that are beyond the capabilities of a
conventional computer. These problems are characterized by the fact that they require
a vast number of computations that will take a conventional computer days or even
weeks to complete. In many science and engineering applications, the problem can be
formulated in terms of vectors and matrices that lend themselves to vector processing.

Q5: Why is micro programmed control better than hardwired? Identify some situations
when hardwired is preferred.

1. If we want any modification or change, we can make it easily by updating the
microprogram in control memory. In the hardwired case we would have to rebuild the
entire circuit, which is very difficult.
For example:

Taking our basic computer as an example, we notice that its four-bit op-code permits up to 16
instructions. Therefore, we could add seven more instructions to the instruction set by simply
expanding its microprogram. To do this with the hardwired version of our computer would
require a complete redesign of the controller circuit hardware.
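To make the contrast concrete, here is a toy C sketch (the control-word fields and table
entries are my own illustrative choices, not the book's tables) in which control memory is a
table indexed by the 4-bit op-code; adding an instruction means filling in an unused row
rather than redesigning any logic:

#include <stdio.h>

/* A toy control word: each field drives one group of control signals. */
typedef struct {
    const char *mnemonic;
    unsigned alu_op;      /* which ALU function to select   */
    unsigned mem_read;    /* assert memory-read this cycle  */
    unsigned mem_write;   /* assert memory-write this cycle */
} ControlWord;

/* A 4-bit op-code indexes 16 slots of "control memory".
 * Adding an instruction = filling in an unused row.      */
static ControlWord control_memory[16] = {
    [0x0] = {"AND", 0, 1, 0},
    [0x1] = {"ADD", 1, 1, 0},
    [0x3] = {"STA", 0, 0, 1},
    /* the remaining rows are free for seven or more new instructions */
};

int main(void) {
    unsigned opcode = 0x1;
    ControlWord cw = control_memory[opcode];
    printf("%s: alu=%u rd=%u wr=%u\n", cw.mnemonic, cw.alu_op, cw.mem_read, cw.mem_write);
    return 0;
}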

2. Another advantage to using micro-programmed control is the fact that the task of designing
the computer in the first place is simplified. The process of specifying the architecture and
instruction set is now one of software (micro-programming) as opposed to hardware design.

3. It simplifies the design of the control unit.
4. Microprogrammed control is cheaper than hardwired control.
5. It is less error-prone.
6. It is easier to modify than hardwired control.
[Figure: diagram of micro-programmed control]
Some situations when hardwired control is preferred:
1. Speed: if speed is a consideration, hardwiring may be required, since it is faster to have
the hardware issue the required control signals than to have a "program" do it.

2. Implementing RISC: a hardwired control unit is typically used to implement the control
unit in a pure RISC processor, while microprogrammed control is not usually used for
implementing RISC.
Q6: Parallel processing is a useful concept. But are there any constraints in the
implementation of a parallel processing environment? How can one overcome such
constraints?

Parallel processing environments:- All parallel processing environments are categorized
as one of the following:

• SMP (symmetric multiprocessing): in this type of multiprocessing the processors share
some hardware resources. The processors communicate via shared memory and run
a single operating system.
• Cluster or MPP (massively parallel processing), also known as shared-nothing: in this
type each processor has exclusive access to its hardware resources.

SMP systems allow us to scale up the number of processors, which might improve
performance of our jobs. The improvement gained depends on how our job is limited:

• CPU-limited jobs. In these jobs the memory, memory bus, and disk I/O spend a
disproportionate amount of time waiting for the processor to finish its work. Running a
CPU-limited application on more processors can shorten this waiting time and so speed
up overall performance.
• Memory-limited jobs. In these jobs CPU and disk I/O wait for the memory or the
memory bus. SMP systems share memory resources, so it might be harder to improve
performance on SMP systems without hardware upgrade.
• Disk I/O limited jobs. In these jobs CPU, memory and memory bus wait for disk I/O
operations to complete. Some SMP systems allow scalability of disk I/O, so that
throughput improves as the number of processors increases. A number of factors
contribute to the I/O scalability of an SMP, including the number of disk spindles, the
presence or absence of RAID, and the number of I/O controllers.

Parallel Processing Issues:- Parallel processing is a method of splitting a computational
task into subtasks and then performing the subtasks simultaneously.

Parallel processing can eliminate idle CPU time because the workload is divided among all
CPUs; therefore, the amount of work performed per unit time (the throughput) increases.
However, parallel processing also introduces some overhead into program execution. In
some cases, you may be able to reduce wall-clock time, but at the cost of extra CPU time
which increases because more machine resources are used.
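As a small illustration of both the benefit and the overhead, the sketch below (using POSIX
threads; the names are my own) splits an array sum into subtasks. Thread creation and the
final join are exactly the kind of acquisition and synchronization costs discussed in this
section.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

typedef struct { int lo, hi; double partial; } Task;

/* Each subtask sums one slice of the array. */
static void *sum_slice(void *arg) {
    Task *t = arg;
    t->partial = 0.0;
    for (int i = t->lo; i < t->hi; i++)
        t->partial += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;

    pthread_t tid[NTHREADS];
    Task task[NTHREADS];
    int chunk = N / NTHREADS;

    /* Overhead: creating threads costs time before any useful work happens. */
    for (int t = 0; t < NTHREADS; t++) {
        task[t].lo = t * chunk;
        task[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_slice, &task[t]);
    }

    /* Synchronization point: wait for every subtask, then combine the results. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += task[t].partial;
    }
    printf("sum = %.0f\n", total);
    return 0;
}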

By using parallel processing, we can alleviate some of the following common problems:

• Maximum-memory jobs: if memory is occupied by a few large-memory jobs, one
or more of the CPUs might be idle even though there are other jobs to run.
• Dedicated machine: if the computer is running a single job, then all other CPUs are
idle.
• Light workload: if the number of jobs waiting for a CPU is less than the total number of
CPUs, then one or more of the CPUs becomes idle.

Parallel processing introduces some overhead into program execution. This subsection
discusses some of the common types of overhead introduced by parallel processing:

• Processors are sometimes held for the next parallel region to improve efficiency. While
holding processors can save time, it also costs time to acquire and hold them.
• Multitasked programs require more memory than unitasked programs, and they can
contain more code, more temporary variables, and can require additional stack space.
• Multitasked jobs can be swapped more often, and remain swapped longer, on a
heavily loaded production system.
• Overhead is incurred when slave processors are acquired (on entry to a parallel
region) and at synchronization points within parallel regions. Tests show that the
overhead of executing extra auto tasking code adds a nominal 0% to 5% to the overall
execution time.
• Processors are forced to wait on semaphores during the process of synchronization.
• If inner-loop auto tasking is used, vector performance can decrease because of shorter
vector lengths and more vector loop start-ups.
