
COMPARISON OF SPEED-UP TECHNIQUES FOR SIMULATION

Poul E. Heegaard,
Norwegian Institute of Technology,
N-7034 Trondheim, Norway

Abstract
Due to increased complexity and extreme quality of service requirements,
traditional means for performance evaluation of telecom networks do not
always suffice. The most flexible approach has been, and still is, simulation,
which is the main focus of this paper. However, severe efficiency problems
exist: evaluation of ATM cell loss rates in the range of 10^-9 or below, for
instance, is very computer intensive and hardly feasible within reasonable
time with direct simulation. This calls for new and more efficient simulation
techniques. A short overview of some speed-up approaches is given.
This paper compares three of the most promising techniques within the rare
event provoking approach: importance sampling, RESTART and transition
splitting. They have all demonstrated significant speed-ups over direct simu-
lation. An experiment is carried out with the three techniques on the same
system example, and the comparison measure takes into account both the
variability of the estimates and the CPU time consumption.

1 INTRODUCTION
During recent years we have experienced a rapid evolution of telecom net-
works, with increasingly strict requirements on the quality of service (QoS).
Additionally, networks and systems grow in size and complexity, and the
technical evolution pushes the bounds on operational speed ever higher.
Traditionally, three main approaches to evaluation apply. First, the analytic
model approach, which can be very efficient but requires a high level of
abstraction, considerable effort and skill, and good system knowledge to
arrive at a tractable and realistic model. The complexity of telecom systems
makes this a formidable and in many cases unattainable task without unreal-
istic assumptions.
Second, the simulation model approach is more flexible than analytic models
in the sense that an arbitrary level of detail is allowed. The computations
(simulations) are, however, normally more demanding. In systems with strict
QoS requirements and high network performance the situation is even worse:
an enormous number of events must be handled for every observation that
contributes to the performance measure. The service degradation (e.g. lost
traffic or a system failure) is the rare event in this context.
The third and most direct approach is measurement on a network/system.
This requires that the system (or at least a prototype) exists, which implies a
considerable cost and may put strong limitations on the flexibility of the
experimental set-up. However, in a controlled laboratory environment the
flexibility is greater; for instance, Telenor R&D has measurement equipment
[1] for generation and measurement of ATM traffic [2].
It is well known that to validate figures in the range 10^-9 – 10^-11, extremely
long measurement or simulation periods are required. Hence, the “rare
events” represent a serious problem for dimensioning and QoS validation.
However, even where the traditional approaches do not apply, there exists a
variety of improvements to the simulation approach that aim at reducing the
time a simulation experiment needs to obtain accurate results. Any method
which reduces the simulation run time is referred to as a speed-up simulation
technique.
Section 2 gives a brief overview of the three main speed-up approaches in the
literature, and Section 3 introduces the illustrative system example. Then, in
Section 4, three rare event provoking techniques are presented. Results and
discussion of the comparison follow in Section 5, and finally some closing
remarks on the observations and on directions for further work are given.

2 OVERVIEW OF SPEED-UP TECHNIQUES


In this paper, a speed-up technique refers to any means to get more informa-
tion out of an experiment, either by
1. Parallel or distributed processing: The number of events is increased by
distributing the simulation process on several processors. The speed-up
factor is normally less than the number of processors, even with “smart”
strategies for handling interactions [3, 4, 5, 6]. A better way to exploit a
multiprocessor environment is to distribute several independent replicas of
the same process on different processors, e.g. a cluster of workstations
[3,7,8].

[Figure 1 is a tree diagram: speed-up simulation techniques split into parallel/
distributed techniques (process distribution, replicas), hybrid techniques
(decomposition, conditional expectation) and variance reduction, where
variance reduction in turn splits into variance minimizing (correlated outputs,
control variables, stratified sampling) and rare event provoking (importance
sampling, RESTART).]

Figure 1 Speed-up technique overview, from [11].

2. Hybrid technique: It combines the flexibility of simulation with the com-
   putational efficiency of analytic models. Hybrid techniques have been
   reported to give substantial computational savings when applied to
   communication networks [9].
3. Variance reduction: Variance minimizing exploits known correlations
   between different observations to reduce the variance of the estimates [10].
   However, this will not increase the number of observations the way the
   rare event provoking techniques do.
In [11] these techniques are discussed in more detail. See also [3,9] for sur-
veys and Figure 1 for an overview.

3 ILLUSTRATIVE SYSTEM EXAMPLE


A simple illustrative example is introduced to describe the speed-up
techniques and to make a relative comparison of them. The reference system
has a well known and simple solution, see e.g. [12], and the simulation study
in this paper is, obviously, for illustration purposes only.

Consider a queueing system (M/M/1/N) as in the following figure:
[Figure: a single-server queue with arrival rate λ, mean service time µ^-1 and
queueing size N − 1.]

Up to N customers can be in the system at the same time. The performance
measure of interest is the probability of having a full system (i.e. N simulta-
neous customers). If this probability is very small, evaluation by direct
simulation becomes problematic.
A model of the M/M/1/N queueing system can be illustrated by a Markov
state diagram, see Figure 2. A state is the current number of customers in the
system.
[Figure 2: birth–death chain with states 0, 1, 2, …, N; arrivals at rate λ move
the chain one state up and service completions at rate µ move it one state
down.]
Figure 2 State diagram description of the model of the reference system.

Using regenerative simulation [13], state 0 is the regenerative state. A regen-
erative cycle is the sequence of events (arrivals and service completions)
between two visits to the regenerative state, also called a sample path or
trajectory.
Table 1 includes the parameter settings for the two cases used in the
following, together with key results such as the probability of a full system,
P(N).

Table 1 Parameters and key results for the two cases

  Case   N    λ      µ   Cycle time(a)   P(N)(b)
  I      6    0.05   1   21.053          7.4219 x 10^-10
  II     85   0.8    1   6.25            9.2634 x 10^-10

  a. The average time from leaving state 0 until the next time state 0 is left.
  b. The probability of observing the system in state N.
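
To make the regenerative set-up concrete, the following sketch shows how a
direct regenerative simulation of the model could be organized. It is an
illustration only; the function name, seed and cycle count are choices made
for this sketch, not taken from the study.

import random

def direct_estimate(lam, mu, N, n_cycles, seed=1):
    """Direct regenerative simulation of the M/M/1/N model of Figure 2.

    A cycle runs from the moment state 0 is left until the next time state 0
    is left (cf. footnote a of Table 1), i.e. a busy period plus the following
    idle period.  P(N) is estimated as the ratio between the accumulated time
    in state N and the accumulated cycle time.
    """
    rng = random.Random(seed)
    time_in_N, cycle_time = 0.0, 0.0
    for _ in range(n_cycles):
        state = 1                              # the arrival ending the idle period starts the cycle
        while state > 0:
            rate = (lam + mu) if state < N else mu
            sojourn = rng.expovariate(rate)
            cycle_time += sojourn
            if state == N:
                time_in_N += sojourn
            if state < N and rng.random() < lam / (lam + mu):
                state += 1                     # arrival
            else:
                state -= 1                     # service completion
        cycle_time += rng.expovariate(lam)     # idle period in state 0 before the next cycle
    return time_in_N / cycle_time

# Case I parameters from Table 1: the full-system probability is so small that
# even 10^5 cycles will typically not contain a single visit to state N.
print(direct_estimate(lam=0.05, mu=1.0, N=6, n_cycles=100_000))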

4 RARE EVENT PROVOKING

Of all the approaches in Figure 1, only the rare event provoking ones provide
means for manipulating the generating processes, that is, for increasing the
sampling of the rare events that are of vital importance to the performance
measure.

The next three subsections present three such techniques in more detail. In
the following, the sample path (see Section 3) is a sequence of events
(arrivals and service completions) in a cycle, denoted X. The accumulated
sojourn time over the states visited in this path is a function f(X) of the
sample path, denoted C. The observed time in state N is also a function g(X)
of the sample path, denoted Y. The quantity of interest, θ, is the probability
of state N, which is estimated as the ratio between Y and C.

4.1 STRATIFIED SAMPLING


In stratified sampling, see e.g. Chapter 11.5 in [10], the outcome X is strati-
fied (partitioned) into L strata S_l with probabilities p_l. X is denoted an
intermediary variable, which should be correlated with the observation of
interest Y; see the introduction to this section for the interpretation.
Given a suitable stratification, a fixed number n_l of samples is taken from
each of the L strata. Hence, the estimator given that the samples were taken
from S_l is Y_l = (1 ⁄ n_l) ∑_{j=1}^{n_l} Y_{lj}, and then finally

  Y = ∑_{l=1}^{L} p_l Y_l                                        (1)

is the stratified estimator for E(Y).


The main problems with this strategy are, first, to define a stratification that
reduces the variance and, second, to calculate the corresponding probabilities
p_l. See Chapter 11.5 in [10] for further details.
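
In code, the combination step in (1) is simply a probability-weighted average
of the per-stratum sample means. The following minimal sketch assumes the
stratification, the probabilities p_l and the conditional samples are already
available; the numbers in the example are made up for illustration.

def stratified_estimate(strata):
    """Stratified estimator (1): Y = sum over l of p_l * mean(Y_l).

    `strata` is a list of (p_l, samples_l) pairs, where p_l is the known
    probability of stratum S_l and samples_l holds the n_l observations Y_lj
    drawn conditionally on that stratum.
    """
    return sum(p_l * sum(samples) / len(samples) for p_l, samples in strata)

# Hypothetical three-stratum example, for illustration only:
print(stratified_estimate([(0.90, [0.0, 0.0, 0.1]),
                           (0.09, [0.2, 0.3]),
                           (0.01, [0.8, 0.9, 1.0])]))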
Recently, a variant of stratified sampling called transition splitting has been
published [14]. This technique is extremely efficient on models like the
M/M/1 queueing model used as the illustrative reference example, see
Section 3. The reason is that it uses all available exact knowledge and leaves
very little to be simulated. Details can be found in [14].
Generally, it is very hard to find a suitable transition splitting and to calculate
the probability of each stratum. In fact, in our reference example it is more
demanding to calculate the strata probabilities than to find the exact state
probabilities.

4.2 IMPORTANCE SAMPLING


Importance sampling (IS) is a technique for variance reduction in computer
simulation where the outcomes are sampled in proportion to their relative
importance to the result. For a general introduction, see for instance
Chapter 11 in [10].
The method has been applied to several areas, e.g. both dependability [15,16]
and teletraffic [17,18,19] assessment. In [20] IS is proposed as a method to
provoke cell losses by a synthetic traffic generator like the STG developed in
the PARASOL project, see [21].
Consider an observation Y as a function g(X) of the sample X stemming
from the cumulative distribution F_X(x); see the introduction to this section
for the interpretation. The observation Y rarely gives a non-zero contribution
to the quantity of interest, θ, but it is nevertheless of great importance. The
basic idea of importance sampling is to introduce a new distribution F_X*(x)
such that sampling X from F_X*(x) instead of F_X(x) gives non-zero
observations more frequently.
The expected value of Y is

  θ = E(Y) = E_X(g(X)) = ∫ g(u) dF_X(u) = ∫ g(u) · ( f_X(u) ⁄ f_X*(u) ) dF_X*(u) = E_X*( Λ(X) · g(X) )        (2)

where the term Λ(u) = f_X(u) ⁄ f_X*(u) is denoted the likelihood ratio, which is
the ratio between the probability of u under the original distribution and the
corresponding probability under the new distribution. In regenerative simu-
lation this is the ratio between the sample path probabilities under the original
and the new distributions. Eq. (2) requires that f_X*(u) ≠ 0 whenever
g(u) f_X(u) ≠ 0.
The variance of (2) is (1 ⁄ n) ∫ ( g(x) · Λ(x) − θ )² dF_X*(x), see e.g. [15].
Hence, the variance is minimized when

  f_X*(x) = f_X(x) · g(x) ⁄ θ                                    (3)

However, (3) requires exact knowledge of θ, which is the very quantity to be
estimated. Moreover, in a system simulation there will be an infinite number
of discrete sample paths X, which makes it infeasible to compute f_X(x).
Nevertheless, (3) gives a guideline for choosing the new distribution: it
should put most weight on the samples that are likely under the original
distribution (f_X(x)) and give the largest contribution to θ (g(x)).

According to (2), θ can be estimated either by taking samples of Y from
F_X(x) (direct simulation) or samples of Y · Λ from F_X*(x). An unbiased
estimator for the latter is

  Y = (1 ⁄ n) ∑_{i=1}^{n} g(X_i) · Λ(X_i) = (1 ⁄ n) ∑_{i=1}^{n} Y_i · Λ(X_i)        (4)

The results produced by application of importance sampling are reported to
be very sensitive to the parameters involved in choosing the new
distribution F_X*(x). To the author’s knowledge, optimal parameters (biasing)
exist only for very limited “classes” of systems, such as the reference system
in Section 3. For these, according to results from rare event theory [22], an
interchange of the transition probabilities will minimize the variance of the
estimator in (4) [17].
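
For the reference example this interchange can be sketched as below. The
change of measure is applied to the embedded arrival/departure probabilities
only, and (as one common practical device, assumed here rather than taken
from the paper) it is switched off once state N has been reached so that the
cycle terminates quickly; the likelihood ratio Λ then reduces to a product of
jump-probability ratios over the biased transitions.

import random

def is_estimate(lam, mu, N, n_cycles, seed=2):
    """Importance sampling sketch for P(N) in the M/M/1/N example, cf. (2), (4).

    Inside a busy period the arrival/departure probabilities lam/(lam+mu) and
    mu/(lam+mu) of the embedded jump chain are interchanged until state N is
    hit for the first time; afterwards the original probabilities are used.
    Sojourn times keep their original distributions, so their densities cancel
    in the likelihood ratio.
    """
    rng = random.Random(seed)
    p = lam / (lam + mu)            # original probability of an arrival in states 1..N-1
    p_star = mu / (lam + mu)        # interchanged (new) arrival probability
    acc = 0.0                       # accumulates Y_i * Lambda(X_i) over the cycles
    for _ in range(n_cycles):
        state, llr, y, biased = 1, 1.0, 0.0, True
        while state > 0:
            rate = (lam + mu) if state < N else mu
            sojourn = rng.expovariate(rate)
            if state == N:
                y += sojourn
                biased = False      # switch the bias off after the rare event
                state -= 1          # only departures are possible in state N
                continue
            if rng.random() < (p_star if biased else p):
                if biased:
                    llr *= p / p_star
                state += 1
            else:
                if biased:
                    llr *= (1.0 - p) / (1.0 - p_star)
                state -= 1
        acc += y * llr
    mean_time_in_N = acc / n_cycles
    mean_cycle_time = 1.0 / (mu - lam) + 1.0 / lam   # 21.053 for Case I, cf. Table 1
    return mean_time_in_N / mean_cycle_time

print(is_estimate(lam=0.05, mu=1.0, N=6, n_cycles=20_000))   # Case I parameters from Table 1

The mean cycle time used in the denominator is not a rare-event quantity, so
it could equally well be obtained from a short direct simulation run.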
Importance sampling in dependability simulation applies heuristics to define
the best possible change in parameters. In the literature we find a number of
approaches, such as failure path estimation [23], minimized empirical vari-
ance [24], balanced failure biasing [25], and failure distance biasing where
the parameters are changed adaptively during the simulation [26].

4.3 RESTART
Another importance sampling concept is the RESTART (REpetitive Simula-
tion Trials After Reaching Thresholds), first introduced at ITC’13 by Villén-
Altamirano [27]. The basic idea is to sample the rare event from a reduced
state space where this event is less rare. The probability of this state space is
estimated by direct simulation, and by Bayes formulae the final estimate is
established.
Consider a rare event Y and an intermediate, less rare event I. RESTART uses
regenerative simulation and exploits the fact that Y is less rare when starting
from I instead of from 0. By Bayes’ formula, which says

  p_Y = p_{Y|I} · p_I                                            (5)

it is realized that the probability of Y can be calculated from the probability
of Y when starting from I, combined with the probability of I. Figure 3 shows
an example of the use of RESTART on an M/M/1/N queueing model.
Estimation of p_Y by n = n_I + n_{Y|I} regenerative cycles goes as follows:

[Figure 3: the full chain of Figure 2 is simulated to estimate the probability of
the intermediate state (here: state 2), while a RESTART sub-chain on states
2, …, N, started from the intermediate point, is simulated to estimate the
conditional probability of state N.]
Figure 3 The RESTART method applied on the M/M/1/N queueing model
from Figure 2 (λ < µ).

• n_{Y|I} cycles produce the p̂_{Y|I} estimate,
• n_I cycles produce the p̂_I estimate,
and the relation in (5) gives the final estimate.
Introducing T_I and T_{Y|I} as the mean sojourn times in state I and in Y given
start in I, respectively, and C and C_I as the mean cycle times for start in
state 0 and in I, respectively, the final estimator is

  p̂_Y = p̂_{Y|I} · p̂_I = (T_I ⁄ C) · (T_{Y|I} ⁄ C_I)              (6)
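
A rough two-stage sketch of (5) and (6) for the reference example is given
below. Here the intermediate event is read as “at least I customers in the
system”, so that the sub-chain restricted to states I, …, N (with the transition
I → I−1 removed) carries the conditional time fractions of the original chain.
This reading, and all names in the sketch, are assumptions made for the
illustration, and the sketch leaves out the trajectory splitting of full RESTART.

import random

def restricted_fraction(lam, mu, lo, N, target, n_cycles, seed):
    """Regenerative simulation of the birth-death chain of Figure 2 restricted
    to states {lo, ..., N} (the transition lo -> lo-1 is cut), regenerating in
    state lo.  Returns the fraction of time spent in states >= target."""
    rng = random.Random(seed)
    total, above = 0.0, 0.0
    for _ in range(n_cycles):
        state, first = lo, True
        while first or state != lo:
            first = False
            if state == lo:                     # only arrivals leave the lower boundary
                sojourn, up = rng.expovariate(lam), True
            elif state == N:                    # only departures leave the upper boundary
                sojourn, up = rng.expovariate(mu), False
            else:
                sojourn = rng.expovariate(lam + mu)
                up = rng.random() < lam / (lam + mu)
            total += sojourn
            if state >= target:
                above += sojourn
            state += 1 if up else -1
    return above / total

# Two-stage estimate in the spirit of (5)-(6), Case I with I = 2 (cf. Table 2, footnote a):
lam, mu, N, I = 0.05, 1.0, 6, 2
p_I  = restricted_fraction(lam, mu, 0, N, I, n_cycles=20_000, seed=3)    # P(at least I), not rare
p_YI = restricted_fraction(lam, mu, I, N, N, n_cycles=200_000, seed=4)   # P(N | at least I)
print(p_I * p_YI)                                # estimate of the full-system probability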

The variance, Var(p̂_Y), of this estimator is found by applying results
from [28]. First, the variance Var(p̂_I) = S_I² ⁄ (n_I · C²) is needed, where

  S_I² = S_T² + (T_I ⁄ C)² · S_C² − 2 · (T_I ⁄ C) · S_{T,C}

S_i² is the estimated variance of i, e.g. of T_I, and S_{T,C} is the corresponding
estimated covariance. Var(p̂_{Y|I}) is found correspondingly, and hence,
because p̂_{Y|I} and p̂_I are independent,

  Var(p̂_Y) = Var(p̂_{Y|I}) · Var(p̂_I) + Var(p̂_{Y|I}) · E(p̂_I)² + E(p̂_{Y|I})² · Var(p̂_I)        (7)
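
Since p̂_{Y|I} and p̂_I stem from independent runs, (7) is the exact variance of
a product of two independent estimators, which is a one-liner to evaluate (the
helper name below is chosen for this sketch):

def var_of_product(mean_a, var_a, mean_b, var_b):
    """Variance of the product of two independent estimators A and B, as in (7):
    Var(A*B) = Var(A)*Var(B) + Var(A)*E(B)^2 + E(A)^2*Var(B)."""
    return var_a * var_b + var_a * mean_b ** 2 + mean_a ** 2 * var_b

# e.g. var_of_product(p_YI_hat, var_YI, p_I_hat, var_I) would give Var(p_Y_hat).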

To minimize this variance, the optimal intermediate point I and the optimal
n_I ⁄ n_{Y|I} ratio must be found. For this purpose, the limited relative error
(LRE) has been proposed to control the variability [29]. In the comparison
study of Section 5, LRE is used to find the optimal I and to distribute the
fixed total number of regenerative cycles n = n_I + n_{Y|I} between the cycles
estimating p̂_I and p̂_{Y|I}.
The RESTART inventors have recently extended their method to a multi-
level RESTART [30], simply by introducing several intermediate points.

5 COMPARISON

5.1 RESULTS
The simulation experiments were carried out with the same fixed number of
regenerative cycles for all techniques, and the CPU time consumption t_CPU
and the variance of the sample mean, S² ⁄ n, were registered. Both enter the
measure of efficiency, m, defined for the comparison of the three techniques:

  m = t_CPU · S² ⁄ n                                             (8)

This measure is used in the relative comparisons in Tables 2 and 3.
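
Given the registered t_CPU and S²/n for each technique, the entries of
Tables 2 and 3 are simply ratios of the efficiency measure (8). A small helper
could look like the following; the input numbers are made up for illustration
and are not the measurements behind the tables.

def comparison_matrix(results):
    """All-to-all relative comparison as in Tables 2 and 3.

    `results` maps a technique name to (t_cpu, var_of_mean), where
    var_of_mean = S^2 / n.  Entry [j][i] of the returned matrix is m_j / m_i
    with m = t_cpu * var_of_mean as defined in (8)."""
    m = {name: t_cpu * var for name, (t_cpu, var) in results.items()}
    return {j: {i: m[j] / m[i] for i in m} for j in m}

# Hypothetical input, for illustration only:
print(comparison_matrix({"RESTART":     (12.0, 3.0e-20),
                         "IS reversed": ( 9.0, 2.5e-21)}))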


Table 2 All-to-all relative comparison for Case I (N = 6, λ/µ = 0.05), i.e.
m_j/m_i where i, j = Direct, RESTART, TransSplit, IS balanced, IS reversed.
All results stem from 20 000 regenerative cycles (direct: 100 000 000).

  RELATIVE TO, j \ COMPARE, i   Direct        RESTART(a)    TransSplit    IS balanced   IS reversed
  Direct                        1             277.4         171 707 155   26.9          4 089.8
  RESTART                       4.0 x 10^-3   1             619 364       9.7 x 10^-2   14.7
  TransSplit                    5.8 x 10^-9   1.6 x 10^-6   1             1.6 x 10^-7   2.4 x 10^-5
  IS balanced(b)                3.7 x 10^-2   10.3          6 380 461     1             151.9
  IS reversed                   2.4 x 10^-4   6.8 x 10^-2   42 005        7.0 x 10^-3   1

  a. The optimal intermediate point is I_opt = 2.
  b. Balanced parameterization means that the arrival and departure probabilities are equal (here 0.5).

5.2 OBSERVATIONS
Table 3 All-to-all relative comparison for Case II (N = 85, λ/µ = 0.8), i.e.
m_j/m_i where i, j = RESTART, TransSplit, IS balanced, IS reversed.
All results stem from 20 000 regenerative cycles(a).

  RELATIVE TO, j \ COMPARE, i   RESTART(b)    TransSplit   IS balanced   IS reversed
  RESTART                       1             134 024      0.4           37.8
  TransSplit                    7.0 x 10^-3   1            3.0 x 10^-3   0.3
  IS balanced                   2.3           309.3        1             87.2
  IS reversed                   2.6 x 10^-2   3.5          1.1 x 10^-2   1

  a. Direct simulation with 200 000 000 cycles did not produce any non-zero observations.
  b. The optimal intermediate point is I_opt = 44.

Several interesting observations can be made from the results presented in
Tables 2 and 3. First of all, it is important to observe that all three techniques
gave stable and unbiased results with significant speed-ups over direct simu-
lation, see [11] for details. In fact, direct simulation is not applicable here,
and it is necessary to invest in an accelerated simulation technique.
From the results it seems obvious that transition splitting (TS) should be cho-
sen because it outranks both RESTART and importance sampling (IS). But
bear in mind that the system example is a rather simple one, which is analyt-
ically tractable. TS depends heavily on the ability to calculate the strata
probabilities. A change in our example, e.g. introducing n different types of
customers instead of a single one, makes the calculation of the strata proba-
bilities intractable, and application of TS is no longer feasible. The same
happens when the arrival process is changed to a non-Poisson one.
Turning to importance sampling with optimal parameters, this seems rather
promising. However, although an optimal parameterization exists for our
simple system, this is generally not the case. In fact, if we make the same
changes as above, we no longer have a known optimal parameterization.
Nevertheless, even without optimal parameters, IS will provoke rare events
rather efficiently, but it must be handled with care.
The IS results are sensitive to the choice of parameters, see e.g. the difference
in efficiency between the reversed and balanced parameterizations. In fact,
balanced IS is worse than RESTART while reversed IS is much better.
Parameters far from optimal may produce very wrong estimates, see Figure 4,
adapted from [31].
The third method, RESTART, is not as effective as optimal IS and TS when
the λ ⁄ µ ratio is low, because the intermediate point might be a rare event as
well. The difference is smaller when the ratio becomes higher. The multi-level
RESTART proposed in [30] is expected to improve this because several
intermediate points will be used. However, the effect will probably be most
visible where the number of states is large, such as N = 85.
[Figure 4: estimated blocking probability (logarithmic axis, 5 x 10^-9 to
5 x 10^-7) as a function of the bias parameter; the estimates agree with the
theoretical value only within a limited “stable” interval of the bias parameter.]
Figure 4 Illustration of how sensitive the estimates are to changes in the
bias parameter, figure adapted from [31].
RESTART will also experience significant problems in the multi-dimensional
state space, this time with defining the intermediate points, i.e. how to reduce
the state space to make rare events less rare.

6 CLOSING REMARKS
Speed-up simulation techniques seem necessary in the assessment of traffic
and dependability aspects of current and future telecom systems and networks.
From the number of publications there seems to be an increasing understanding
of and interest in this topic, see e.g. the ACTS proposal TASK AC316, where
the development of a special purpose hardware simulator was proposed.
The techniques are not yet mature and ready for practical applications. They
are technically challenging and contain some nasty pitfalls (we have fallen
into some of them ourselves). Rare event provoking techniques seem the most
efficient at producing significant speed-ups over direct simulation. A compar-
ative study was therefore carried out for three of these techniques.
RESTART is a robust technique that clearly improves the speed-up factor
when the queueing length of the M/M/1 queue example increases, but both
optimal importance sampling and transition splitting show significantly
higher speed-ups. Multi-level RESTART is assumed to level out this
difference to some extent.

Transition splitting is extremely efficient for the simple M/M/1 queueing sys-
tem. However, TS will generally suffer from severe problems in obtaining the
correct splitting and its corresponding probabilities.
Importance sampling is robust and efficient in the M/M/1 queue where an
optimal importance biasing is known. Unfortunately, optimal parameters are
generally not known. Previous results have shown that importance sampling
is very sensitive to the choice of biasing parameters and that a non-optimal
regime may give estimates that are more than one order of magnitude wrong.
The results in this paper show that for a non-optimal parameterization such
as balanced transition probabilities, IS may be worse than RESTART while
optimal IS is much better.
All three methods will, as commented, experience problems in multi-dimen-
sional state spaces. Importance sampling is the most flexible and most
promising in this respect. I am currently studying the application of results
from rare event theory in the hope of establishing an optimal parameterization
of IS in a multi-dimensional model.
Finally, note that the three methods are not mutually exclusive. IS may easily
be combined with both RESTART and transition splitting. How to combine
RESTART and transition splitting is not obvious. As an example of practical
use, we have experience with the combination of
• parallel simulation replicas: replications of the simulation experiment are
distributed on several processors. This is strongly recommended and easy
to carry out if a cluster of workstations is available.
• importance sampling
• control variables, see e.g. [10]
which were successfully combined in a proposed measurement/simulation
technique for testing of ATM equipment, see [21].

References
[1] B. E. Helvik, O. Melteig, and L. Morland. “The synthesized traffic generator;
objectives, design and capabilities.” In Integrated Broadband Communication
Networks and Services (IBCN&S), IFIP, Elsevier, April 20-23 1993, Copen-
hagen, Denmark. Also available as STF40 A93077.
[2] T. Ormhaug. “Plans for a broadband laboratory at Norwegian Telecom Re-
search.” Internal document F91/u/304, -305, Norwegian Telecom Research,
March 1991. In Norwegian.
[3] J. F. Kurose and H. T. Mouftah. “Computer-aided modeling, analysis, and
design of communication networks.” IEEE Journal on selected areas in com-
munications, 6(1):130 – 145, January 1988.
[4] D. R. Jefferson. “Virtual time.” ACM Transaction on Programming Lan-
guages and Systems, 7(3):404 – 425, July 1985.
[5] P. Heidelberger and D. M. Nicol. “Parallel simulation of Markovian queuing
networks.” In Proceedings of the Second International Workshop on Model-
ling, Analysis, and Simulation of Computer and Telecommunication Systems
(MASCOT’94), pages 35 – 36. IEEE Computer Society Press, January 31 –
February 2 1994.
[6] B. Lubachevsky and K. Ramakrishnan. “Parallel time-driven simulation of a
network using a shared memory MIMD computer.” In D. Potier, editor, Mod-
elling Tools and Techniques for Performance Analysis. North-Holland, 1985.
[7] Y.-B. Lin. “Parallel independent replicated simulation on networks of work-
stations.” In Proceedings of the 8th Workshop on Parallel and Distributed
Simulation (PADS’94), pages 71 – 81. IEEE Computer Society Press, July 6-
8 1994.
[8] P. E. Heegaard and B. E. Helvik. “A technique for measuring cell losses in
ATM systems combining control variables and importance sampling.” Tech-
nical Report STF40 A92138, SINTEF DELAB, October 1992.
[9] V. S. Frost, W. Larue, and K. S. Shanmugan. “Efficient techniques for the
simulation of computer communications networks.” IEEE Journal on select-
ed areas in communications, 6(1):146 – 157, January 1988.
[10] P. A. W. Lewis and E. J. Orav. Simulation Methodology for Statisticians, Op-
eration Analysts and Engineers, volume I. Wadsworth & Brooks/Cole Ad-
vanced Books & Software, 1988.
[11] P. E. Heegaard. “Speed-up techniques for simulation.” Telektronikk, 1995. To
be published.
[12] L. Kleinrock. Queueing Systems, Theory, volume I. John Wiley, 1975.
[13] S. Lavenberg. Computer Performance Modeling Handbook. Academic Press,
1983.
[14] A. A. Gavioronski. “Transition splitting for estimation of rare events with ap-
plication to high speed data networks.” In Labatoulle and Roberts [32], pages
767 – 776.
[15] A. E. Conway and A. Goyal. “Monte Carlo Simulation of Computer System
Availability/Reliability Models.” In Digest of paper, FTCS-17 - The seven-
teenth international symposium on fault-tolerant computing, pages 230 –235,
July 6 - 8 1987.
[16] V. F. Nicola, M. K. Nakayama, P. Heidelberger, and A. Goyal. “Fast simula-
tion of dependability models with general failure, repair and maintenance
processes.” In Proc. 20’th International Symposium on Fault-Tolerant Com-
puting, pages 491 – 498, 1990.
[17] S. Parekh and J. Walrand. “Quick simulation of excessive backlogs in net-
works of queues.” IEEE Transactions on Automatic Control, 34(1):54 – 66,
January 1989.
[18] G. Kesidis and J. Walrand. “A review of quick simulation methods for
queues.” In International Workshop on Modeling, Analysis and Simulation of
Computer and Telecommunication Systems (MASCOTS’93), San Diego, Cal-
ifornia, USA, January 17 - 20 1993.
[19] I. Norros and J. Virtamo. “Importance sampling simulation studies on the dis-
crete time nD/D/1 queue.” In The 8’th Nordic Teletraffic Seminar, pages
VI.3.1 –.16. Tekniska Høgskolan i Helsingfors, Aug. 29 - 31 1989.
[20] B. E. Helvik. “Technology for measurement of rare events in ATM systems.”
Working Paper AN91130, SINTEF DELAB, 1991.
[21] B. E. Helvik and P. E. Heegaard. “A technique for measuring rare cell losses
in ATM systems.” In Labatoulle and Roberts [32], pages 917–930.
[22] J. A. Bucklew. Large Deviation Techniques in Decision, Simulation, and Es-
timation. Wiley, 1990.
[23] R. M. Geist and M. K. Smotherman. “Ultrahigh Reliability Estimates
Through Simulation.” In Annual Reliability and Maintainability Symposium,
pages 350 – 355, Jan. 1989.
[24] M. Devetsikiotis and J. K. Townsend. “Statistical optimization of dynamic
importance sampling parameters for efficient simulation of communication
networks.” IEEE/ACM Transactions on Networking, 1(3):293 – 305, June
1993.
[25] A. Goyal, P. Shahabuddin, P. Heidelberger, and P. Glynn. “A unified frame-
work for simulating Markovian models of highly dependable systems.” IEEE
Transactions on Computers, 41(1):36 – 51, 1992.
[26] J. A. Carrasco. “Failure distance based simulation of repairable fault-tolerant
computer systems.” In G. Balbo and G. Serazzi, editors, Proceedings of the
Fifth International Conference on Computer Performance Evaluation. Mod-
elling Techniques and Tools, pages 351 – 365. North-Holland, Feb. 15-17
1991.
[27] M. Villén-Altamirano and J. Villén-Altamirano. “RESTART: A method for
accelerating rare event simulation.” In C. D. Pack, editor, Queueing Perform-
ance and Control in ATM, pages 71 – 76. Elsevier Science Publishers B. V.,
June 1991.
[28] R. B. Cooper. Introduction to Queueing Theory. North Holland, New York,
NY, 2nd edition, 1981.
[29] F. Schreiber and C. Görg. “Rare event simulation: a modified RESTART-
method using the LRE-algorithm.” In Labatoulle and Roberts [32], pages 787
– 796.
[30] M. Villén-Altamirano et al. “Enhancement of the accelerated simulation
method RESTART by considering multiple thresholds.” In Labatoulle and
Roberts [32], pages 797 – 810.
[31] Q. Wang and V. S. Frost. “Efficient estimation of cell blocking probability for
ATM systems.” IEEE/ACM Transactions on Networking, 1(2):230 – 235,
April 1993.
[32] J. Labatoulle and J. Roberts, editors. The 14th International Teletraffic Con-
gress (ITC’14), Antibes Juan-les-Pins, France, June 6-10 1994. Elsevier.
