
AMDAHL'S LAW

Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors. The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.
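To make that arithmetic concrete, here is a minimal Python sketch of the 20-hour example (the function name and the chosen processor counts are illustrative, not part of the original argument):

```python
def min_runtime(serial_hours, parallel_hours, processors):
    """Best-case runtime: only the parallelizable portion shrinks as processors are added."""
    return serial_hours + parallel_hours / processors

total = 20.0               # hours on a single core
serial = 1.0               # the 1-hour portion that cannot be parallelized
parallel = total - serial  # the remaining 19 hours (95%)

for n in (1, 10, 1000, 1_000_000):
    t = min_runtime(serial, parallel, n)
    print(f"{n:>9} processors: {t:8.4f} h, speedup {total / t:7.4f}x")

# The runtime can never drop below the 1 serial hour, so the speedup
# approaches, but never reaches, 20x.
```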

Description
Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 - 0.12) = 1.136 times faster than the non-parallelized implementation. More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be

    overall speedup = 1 / ((1 - P) + P/S)
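As a quick illustrative check (the helper function below is a sketch, not taken from the article), evaluating the formula reproduces the 1.136× figure from the 12% example, and also evaluates the P = 0.3, S = 2 case mentioned in parentheses:

```python
def overall_speedup(p, s):
    """Amdahl's law: p is the improved proportion, s is the speedup of that portion."""
    return 1.0 / ((1.0 - p) + p / s)

# 12% of the operations made arbitrarily quick (modelled as a very large s):
print(overall_speedup(0.12, 1e12))  # ~1.136, i.e. 1 / (1 - 0.12)

# 30% of the computation made twice as fast (P = 0.3, S = 2):
print(overall_speedup(0.30, 2.0))   # ~1.176
```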

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 - P), plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.

Here is another example. We are given a task which is split up into four parts: P1 = 11%, P2 = 18%, P3 = 23%, P4 = 48%, which add up to 100%. Then we say P1 is not sped up, so S1 = 1 or 100%, P2 is sped up 5×, so S2 = 500%, P3 is sped up 20×, so S3 = 2000%, and P4 is sped up 1.6×, so S4 = 160%. By using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, we find the new running time is

    0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575

or a little less than half the original running time, which we know is 1. Therefore the overall speed boost is 1/0.4575 = 2.186, or a little more than double the original speed, using the formula (P1/S1 + P2/S2 + P3/S3 + P4/S4)^-1. Notice how the 20× and 5× speedups don't have much effect on the overall speed boost and running time when 11% is not sped up and 48% is sped up by 1.6×.
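The same arithmetic can be checked mechanically; this short sketch (variable names are mine) reproduces the 0.4575 running time and the 2.186 overall speed boost:

```python
# (P_i, S_i) pairs for the four parts described above.
parts = [(0.11, 1.0), (0.18, 5.0), (0.23, 20.0), (0.48, 1.6)]

new_runtime = sum(p / s for p, s in parts)
print(new_runtime)        # 0.11 + 0.036 + 0.0115 + 0.3 = 0.4575
print(1.0 / new_runtime)  # ~2.186, the overall speed boost
```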

Parallelization
In the case of parallelization, Amdahl's law states that if P is the proportion of a program that can be made parallel (i.e. benefit from parallelization), and (1 - P) is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using N processors is

    S(N) = 1 / ((1 - P) + P/N)
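A small sketch of this formula (the function name and the choice of P = 0.95, borrowed from the 20-hour example in the introduction, are mine):

```python
def parallel_speedup(p, n):
    """Maximum speedup when a proportion p of the program runs in parallel on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # 95% parallelizable, as in the introductory example
for n in (2, 4, 16, 256, 65536):
    print(f"N = {n:>5}: speedup <= {parallel_speedup(p, n):5.2f}")

# The bound climbs toward, but never exceeds, 1 / (1 - 0.95) = 20.
```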

In the limit, as N tends to infinity, the maximum speedup tends to 1/(1 - P). In practice, the performance-to-price ratio falls rapidly as N is increased once there is even a small component of (1 - P). As an example, if P is 90%, then (1 - P) is 10%, and the problem can be sped up by a maximum of a factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of P: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce the component (1 - P) to the smallest possible value. P can be estimated by using the measured speedup SU on a specific number of processors NP using

    P_estimated = (1/SU - 1) / (1/NP - 1)

P estimated in this way can then be used in Amdahl's law to predict speedup for a different number of processors.
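As an illustration (the 3.0× speedup observed on 4 processors below is an invented figure, not a real measurement), an estimated P can be fed back into the law to predict the speedup on more processors:

```python
def estimate_p(measured_speedup, n_processors):
    """Solve Amdahl's law for P, given a speedup observed on n_processors."""
    return (1.0 / measured_speedup - 1.0) / (1.0 / n_processors - 1.0)

def parallel_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical measurement: 3.0x speedup observed on 4 processors.
p_est = estimate_p(3.0, 4)
print(p_est)                        # ~0.889, the estimated parallel fraction
print(parallel_speedup(p_est, 16))  # ~6.0, the predicted speedup on 16 processors
```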

Relation to law of diminishing returns


Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speed-up) what to improve, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in return. Consider, for instance, the illustration. If one picks to work on B then A, one finds an increase in return. If, instead, one works on improving A then B, one will find a diminishing return. Thus, strictly speaking, only the optimal case can appropriately be said to demonstrate the law of diminishing returns. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or more consuming of development time than others.

Amdahl's law does represent the law of diminishing returns if you are considering what sort of return you get by adding more processors to a machine, if you are running a fixed-size computation that will use all available processors to their capacity. Each new processor you add to the system will add less usable power than the previous one. Each time you double the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 - P). This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth, if they do not scale with the number of processors; however, taking into account such bottlenecks would tend to further demonstrate the diminishing returns of only adding processors.
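One way to see this doubling behaviour (the code and the choice of P = 0.95 are illustrative; "speedup ratio" is read here as the ratio between the speedups before and after each doubling):

```python
def parallel_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95
speedups = [parallel_speedup(p, 2 ** k) for k in range(11)]  # N = 1, 2, 4, ..., 1024

for k in range(1, 11):
    n, prev_s, s = 2 ** k, speedups[k - 1], speedups[k]
    print(f"N = {n:>4}: speedup {s:6.2f}  ({s / prev_s:.3f}x the speedup at N = {n // 2})")

# Each doubling multiplies the speedup by a smaller factor than the one before,
# as the total speedup creeps toward 1 / (1 - P) = 20.
```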

Speedup in a sequential program

Assume that a task has two independent parts, A and B. B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this only reduces the time for the whole computation by a little. In contrast, one may need to perform less work to make part A twice as fast. This will make the computation much faster than by optimizing part B, even though B's speed-up is by a greater ratio (5× versus 2×). The maximum speedup in an improved sequential program, where some part was sped up p times, is limited by the inequality

    maximum speedup ≤ 1 / (f + (1 - f)/p)

where f (0 < f < 1) is the fraction of time (before the improvement) spent in the part that was not improved. For example (see picture on right):

If part B is made five times faster (p = 5), tA = 3, tB = 1, and f = tA / (tA + tB) = 0.75, then

    maximum speedup ≤ 1 / (0.75 + 0.25/5) = 1.25

If part A is made to run twice as fast (p = 2), tB = 1, tA = 3, and f = tB / (tA + tB) = 0.25, then

    maximum speedup ≤ 1 / (0.25 + 0.75/2) = 1.6

Therefore, making A twice as fast is better than making B five times faster. The percentage improvement in speed can be calculated as

    percentage improvement = 100 × (1 - 1/speedup)

Improving part A by a factor of two will increase overall program speed by a factor of 1.6, which makes it 37.5% faster than the original computation. However, improving part B by a factor of five, which presumably requires more effort, will only achieve an overall speedup factor of 1.25, which makes it 20% faster.
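A brief sketch of the two scenarios just described (function and variable names are mine); it reproduces the 1.25/20% and 1.6/37.5% figures:

```python
def bounded_speedup(f, p):
    """Upper bound on overall speedup when the part NOT improved takes fraction f
    of the original time and the improved part runs p times faster."""
    return 1.0 / (f + (1.0 - f) / p)

t_a, t_b = 3.0, 1.0  # part A takes 75% of the original time, part B takes 25%

# Make B five times faster: the untouched part is A, so f = 0.75.
s_b = bounded_speedup(t_a / (t_a + t_b), 5)
# Make A twice as fast: the untouched part is B, so f = 0.25.
s_a = bounded_speedup(t_b / (t_a + t_b), 2)

for label, s in (("B made 5x faster", s_b), ("A made 2x faster", s_a)):
    print(f"{label}: overall speedup {s:.2f}, i.e. {100 * (1 - 1 / s):.1f}% faster")
```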
