Submitted by:
Nishtha Wadhawan
MCA-MTech [1604D]
B34
E3004
Group 2
Submitted to:
Lect. Richa Malhotra
Part - A
Q1: Design a four-bit combinational incrementer and decrementer circuit using full
adders.
4-bit combinational circuit using full adders: Incrementing means adding 1 to the
least significant bit of the input. To obtain the increment, a carry of 1 is applied to the least
significant full adder and propagated through the chain of full adders.
Full adder increment = A + 1
[Diagram: Incrementer — four full adders (FA) in cascade. Each stage's A input takes one
input bit X0..X3, the B inputs are 0, and a constant 1 is fed as carry-in to the least
significant stage; each carry out C feeds the next stage's carry in, producing outputs
S0..S3 and the final carry C4.]
[Diagram: Decrementer — the same cascade of four full adders, but with every B input tied
to 1, so the circuit adds 1111 (the two's complement of 1) to X3X2X1X0, giving X − 1.]
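Both circuits can be checked with a bit-level software simulation. This is a sketch of my own (function names are illustrative), not part of the original answer:

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def increment4(x):
    """4-bit incrementer: carry-in 1 to the LSB stage, all B inputs 0."""
    bits = [(x >> i) & 1 for i in range(4)]        # X0..X3
    carry, out = 1, []
    for a in bits:
        s, carry = full_adder(a, 0, carry)
        out.append(s)
    return sum(s << i for i, s in enumerate(out)), carry   # (S3..S0, C4)

def decrement4(x):
    """4-bit decrementer: every B input tied to 1, so we add 1111 (i.e. -1)."""
    bits = [(x >> i) & 1 for i in range(4)]
    carry, out = 0, []
    for a in bits:
        s, carry = full_adder(a, 1, carry)
        out.append(s)
    return sum(s << i for i, s in enumerate(out)), carry
```

For example, `increment4(0b1001)` yields `(0b1010, 0)` and `decrement4(0b0101)` yields `(0b0100, 1)`; incrementing 1111 wraps to 0000 with C4 = 1.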
Q2: Register A holds the 8-bit binary value 11011001. Determine the B operand and the
logic micro-operations to be performed in order to change the value in A to (a) 01101101
(b) 11111101
(a) Target 01101101 (selective complement, using XOR):
A 11011001
B 10110100
------------------------
A ← A ⊕ B 01101101
(b) Target 11111101 (selective set, using OR):
A 11011001
B 11111101
------------------------
A ← A ∨ B 11111101
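Both micro-operations can be verified in a few lines (a checking sketch, not part of the original answer):

```python
A = 0b11011001

# (a) selective complement: XOR with B flips exactly the bits where B is 1
B1 = 0b10110100
assert A ^ B1 == 0b01101101

# (b) selective set: OR with B forces to 1 exactly the bits where B is 1
B2 = 0b11111101
assert A | B2 == 0b11111101
```

Note that for (b) any B whose 1-bits cover the positions to be set would work; the operand above is the one used in the original answer.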
Q3: Starting from the initial value R = 11011101, determine the sequence of binary values
in R after a logical shift left, followed by a circular shift right, followed by a logical shift
right and a circular shift
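The original leaves this question unanswered. As a sketch, the sequence can be computed with 8-bit shift helpers of my own; the direction of the final circular shift is cut off in the question, so both options are shown:

```python
MASK = 0xFF  # 8-bit register

def shl(r):  # logical shift left: MSB discarded, 0 enters at LSB
    return (r << 1) & MASK

def shr(r):  # logical shift right: LSB discarded, 0 enters at MSB
    return r >> 1

def cil(r):  # circular shift left: MSB wraps around to the LSB
    return ((r << 1) & MASK) | (r >> 7)

def cir(r):  # circular shift right: LSB wraps around to the MSB
    return (r >> 1) | ((r & 1) << 7)

R = 0b11011101
R = shl(R)   # 10111010
R = cir(R)   # 01011101
R = shr(R)   # 00101110
# Final circular shift (direction truncated in the question):
# cil(R) gives 01011100; cir(R) gives 00010111
```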
Part – B
Q4: How are vector processors more efficient than conventional processors in
handling complex numerical operations on large arrays?
Vector processors:- A vector computer or vector processor is a machine designed to
efficiently handle arithmetic operations on elements of arrays, called vectors. Such machines
are especially useful in high-performance scientific computing, where matrix and vector
arithmetic are quite common. The Cray Y-MP and the Convex C3880 are two examples of
vector processors used today.
A vector computer contains a set of special arithmetic units called pipelines. These
pipelines overlap the execution of the different parts of an arithmetic operation on the
elements of the vector, producing a more efficient execution of the arithmetic operation. Just
as scalar types such as real numbers (float, double) and integers (int, long) can be
manipulated, so can vectors, but the operations on them are quite different.
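The difference can be illustrated with a pure-Python sketch of my own (not from the text): a conventional processor steps through the array one element at a time, while a single vector instruction describes the whole elementwise operation and the hardware streams the elements through a pipelined arithmetic unit.

```python
def scalar_add(x, y):
    """Conventional processing: a fetch/decode/execute cycle per element."""
    result = []
    for i in range(len(x)):
        result.append(x[i] + y[i])
    return result

def vector_add(x, y):
    """Stand-in for one vector instruction: a single operation describes
    the whole elementwise add; real hardware pipelines the elements."""
    return [a + b for a, b in zip(x, y)]
```

Both produce the same result; the gain on real vector hardware comes from eliminating per-element instruction overhead and keeping the arithmetic pipeline full.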
Q5: Why is microprogrammed control better than hardwired? Identify some situations
where hardwired control is preferred.
1. If we want to make any modification or change, we can do so easily by updating the
microprogram in control memory. In the hardwired case we would have to redesign and
rebuild the circuit, which is much more difficult.
For example:
Taking our basic computer as an example, we notice that its four-bit opcode permits up to 16
instructions. Therefore, we could add seven more instructions to the instruction set by simply
expanding its microprogram. To do this with the hardwired version of our computer would
require a complete redesign of the controller circuit hardware.
2. Another advantage to using micro-programmed control is the fact that the task of designing
the computer in the first place is simplified. The process of specifying the architecture and
instruction set is now one of software (micro-programming) as opposed to hardware design.
3. Situations where hardwired control is preferred: when speed is critical, hard
wiring may be required, since it is faster to have the hardware issue the required control
signals than to have a "program" do it. Hardwired control is therefore typically used to
implement the control unit in pure RISC processors, where microprogrammed control is not
usually used.
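The flexibility argument in point 1 can be sketched as a table lookup. The opcodes and control-signal names below are hypothetical, chosen only to show that extending the instruction set means adding a control-memory entry rather than redesigning hardware:

```python
# Hypothetical control words for a tiny machine (names are illustrative)
CONTROL_MEMORY = {
    "LOAD":  ["mem_read", "reg_write"],
    "STORE": ["reg_read", "mem_write"],
    "ADD":   ["reg_read", "alu_add", "reg_write"],
}

def control_signals(opcode):
    """Microprogrammed control: look the signal list up in control memory."""
    return CONTROL_MEMORY[opcode]

# Adding an instruction is just adding a microprogram entry --
# no controller circuit is redesigned:
CONTROL_MEMORY["SUB"] = ["reg_read", "alu_sub", "reg_write"]
```

A hardwired controller would instead derive these signals from fixed combinational logic, which is faster but must be rebuilt to accommodate "SUB".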
Q6: Parallel processing is a useful concept, but are there any constraints in
implementing a parallel processing environment? How can one overcome such
constraints?
SMP systems allow us to scale up the number of processors, which might improve
performance of our jobs. The improvement gained depends on how our job is limited:
• CPU-limited jobs. In these jobs the memory, memory bus, and disk I/O spend a
disproportionate amount of time waiting for the processor to finish its work. Running a
CPU-limited application on more processors can shorten this waiting time and so speed up
overall performance.
• Memory-limited jobs. In these jobs the CPU and disk I/O wait for the memory or the
memory bus. SMP systems share memory resources, so it might be harder to improve
performance on SMP systems without a hardware upgrade.
• Disk I/O limited jobs. In these jobs CPU, memory and memory bus wait for disk I/O
operations to complete. Some SMP systems allow scalability of disk I/O, so that
throughput improves as the number of processors increases. A number of factors
contribute to the I/O scalability of an SMP, including the number of disk spindles, the
presence or absence of RAID, and the number of I/O controllers.
Parallel processing can eliminate idle CPU time because the workload is divided among all
CPUs; therefore, the amount of work performed per unit time (the throughput) increases.
However, parallel processing also introduces some overhead into program execution. In
some cases, you may be able to reduce wall-clock time, but at the cost of extra CPU time
which increases because more machine resources are used.
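One standard way to quantify this constraint (not named in the text, but widely used) is Amdahl's law, which bounds the overall speedup by the serial fraction of the job:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only part of a job parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A job that is 90% parallelizable gains little beyond a handful of CPUs:
amdahl_speedup(0.9, 4)    # ~3.08
amdahl_speedup(0.9, 100)  # ~9.17; the speedup can never exceed 10
```

This is why adding processors helps CPU-limited jobs but does little for jobs dominated by serial memory or I/O waits.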
Common types of overhead introduced by parallel processing include:
• Processors are sometimes held for the next parallel region to improve efficiency. While
holding a processor can save time, it also costs time to acquire and hold it.
• Multitasked programs require more memory than unitasked programs, and they can
contain more code, more temporary variables, and can require additional stack space.
• Multitasked jobs can be swapped more often, and remain swapped longer, on a
heavily loaded production system.
• Overhead is incurred when slave processors are acquired (on entry to a parallel
region) and at synchronization points within parallel regions. Tests show that the
overhead of executing extra autotasking code adds a nominal 0% to 5% to the overall
execution time.
• Processors are forced to wait on semaphores during the process of synchronization.
• If inner-loop autotasking is used, vector performance can decrease because of shorter
vector lengths and more vector loop start-ups.